1. Anthony Lewis on Stephen Breyer:
http://www.nybooks.com/articles/archives/2010/nov/11/how-supreme-court-should-and-should-not-work/
2. A bit of the above:
The case for judicial review—the role of courts in enforcing constitutional limits on government power—was memorably made in a 1998 lecture by Aharon Barak, then the president of the Supreme Court of Israel. Before World War II, Justice Barak said, democracies outside of the United States relied for the protection of minorities on the self-restraint of political majorities. But “the twentieth century shattered this approach.” The war and the Holocaust taught the lesson that “it is vital to place formal limits on the power of the majority; that the concept ‘It is not done’ needs to receive the formal expression that ‘It is forbidden.’” Many countries adopted the American model of a written constitution whose rules are enforced by judges.
But why judges?
Because, Justice Barak said, the institution chosen to enforce the constitution must be independent, with members professionally trained to interpret laws:
Of course, for the delicate balance to be maintained, the judge must act objectively. He or she must express the basic values of the constitution—not personal values, and not the values of changing majorities.
By those standards, the Supreme Court of the United States today falls short of justifying its great constitutional function. A headstrong conservative majority is writing personal ideology into law. Freedom of speech is given novel and sweeping sway when the would-be speaker is a corporation but is denied when the speaker wants to try to persuade terrorists to give up violence for peaceful politics.
The Court is so riven by partisanship that justices even pick their law clerks in ways influenced by ideology: one conservative justice, Clarence Thomas, has never chosen a clerk from the chambers of an appeals court judge appointed by a Democratic president.
In 2000, a 5–4 majority of the Court made George W. Bush president on grounds not found in the Constitution, which commits the resolution of contested presidential elections to Congress. The opinions seeking to justify that result did not remove the aroma of a political decision.
Against that unhappy background Justice Stephen Breyer has done something unusual. He has written a calm, reasoned book about how the Supreme Court should do its work and how, in history, it has sometimes failed the challenge. Fair warning: I am a friend of Justice Breyer. But I think his book is a remarkable contribution to educating the public about our constitutional system and those whose job it is to guard its boundaries.
Justice Breyer begins by discussing what he calls the Supreme Court’s democratic legitimacy. Today it is almost universally assumed that the Court’s decisions, however unpopular, will be accepted by the public and the government and will be obeyed. But that was not always so. Breyer describes meeting the chief justice of an African country who asked him, “Why do Americans do what the courts say?” Breyer answered that there was no secret,
no magic words on paper. Following the law is a matter of custom, of habit …
Posner on Aharon Barak:
Enlightened Despot//April 23, 2007//TNR
The Judge in a Democracy, by Aharon Barak
Aharon Barak, a long-serving justice (eventually the chief justice) of the Supreme Court of Israel, who recently reached mandatory retirement age, is a prolific writer, and this is his most recent book. It is an important document, less for its intrinsic merits than for its aptness to be considered Exhibit A for why American judges should be extremely wary about citing foreign judicial decisions. Barak is a world-famous judge who dominated his court as completely as John Marshall dominated our Supreme Court.
If there were a Nobel Prize for law, Barak would probably be an early recipient. But although he is familiar with the American legal system and supposes himself to be in some sort of sync with liberal American judges, he actually inhabits a completely different--and, to an American, a weirdly different--juristic universe. I have my differences with Robert Bork, but when he remarked, in a review of The Judge in a Democracy, that Barak "establishes a world record for judicial hubris," he came very near the truth.
Barak is John Marshall without a constitution to expound--or to "expand," as Barak once revealingly misquoted a famous phrase of Marshall's ("we must never forget it is a constitution that we are expounding"). Israel does not have a constitution. It has "Basic Laws" passed by the Knesset, Israel's parliament, which Barak has equated to a constitution by holding that the Knesset cannot repeal them. That is an amazing idea: could our Congress pass a law authorizing every American to carry a concealed weapon, and the Supreme Court declare that the law could never be repealed? And only one-quarter of the Knesset's members voted for those laws!
What Barak created out of whole cloth was a degree of judicial power undreamed of even by our most aggressive Supreme Court justices.
He puts Marshall, who did less with more, in the shade. (He borrowed from Marshall the trick of first announcing a novel rule in a case in which he concludes that the rule does not apply, so that people get accustomed to the rule before it begins to bite them.) Among the rules of law that Barak's judicial opinions have been instrumental in creating that have no counterpart in American law are that judges cannot be removed by the legislature, but only by other judges; that any citizen can ask a court to block illegal action by a government official, even if the citizen is not personally affected by it (or lacks "standing" to sue, in the American sense); that any government action that is "unreasonable" is illegal ("put simply, the executive must act reasonably, for an unreasonable act is an unlawful act"); that a court can forbid the government to appoint an official who had committed a crime (even though he had been pardoned) or is otherwise ethically challenged, and can order the dismissal of a cabinet minister because he faces criminal proceedings; that in the name of "human dignity" a court can compel the government to alleviate homelessness and poverty; and that a court can countermand military orders, decide "whether to prevent the release of a terrorist within the framework of a political 'package deal,'" and direct the government to move the security wall that keeps suicide bombers from entering Israel from the West Bank.
These are powers that a nation could grant its judges. For example, many European nations and even some states in the United States authorize "abstract" constitutional review--that is, judicial determination of a statute's constitutionality without waiting for a suit by someone actually harmed by the statute. But only in Israel (as far as I know) do judges confer the power of abstract review on themselves, without benefit of a constitutional or legislative provision. One is reminded of Napoleon's taking the crown out of the pope's hands and putting it on his own head.
Barak does not attempt to defend his judicial practice by reference to orthodox legal materials; even the "Basic Laws" are mentioned only in passing. His method, lacking as it does any but incidental references to enacted provisions, may seem the method of the common law (the judge-made law that continues to dominate many areas of Anglo-American law, such as contracts and torts), except that common-law rules are subject to legislative override, and his rules are not. The significance of this point seems to elude him.
He takes for granted that judges have inherent authority to override statutes. Such an approach can accurately be described as usurpative.
Barak bases his conception of judicial authority on abstract principles that in his hands are plays on words. The leading abstraction is "democracy." Political democracy in the modern sense means a system of government in which the key officials stand for election at relatively short intervals and thus are accountable to the citizenry. A judiciary that is free to override the decisions of those officials curtails democracy. For Barak, however, democracy has a "substantive" component, namely a set of rights ("human rights" not limited to political rights, such as the right to criticize public officials, that support democracy), enforced by the judiciary, that clips the wings of the elected officials. That is not a justification for a hyperactive judiciary, it is merely a redefinition of it.
Another portmanteau word that Barak abuses is "interpretation," which for him is remote from a search for the meaning intended by the authors of legislation. He says that the task of a legislature in passing statutes is "to bridge the gap between law and society," and that the task of the judge in interpreting a statute is to "ensure that the law in fact bridges the gap between law and society." This is very odd--isn't the statute the law, rather than the intermediary between the law and the society? What he seems to mean, as further suggested by his statement that "whoever enforces a statute enforces the whole legal system," is that a statute should be interpreted so that it is harmonious with the spirit or values of the legal system as a whole, which as a practical matter means with the judge's ideal system, since no real legal system has a unitary spirit or common set of values.
This understanding of Barak's approach is further suggested by his statement that a judge, in addition to considering the language and background and apparent purpose of a statute, should consider its "objective purpose ... to realize the fundamental values of democracy." This opens up a vast realm for discretionary judgment (the antithesis of "objective"); and when a judge has discretion in interpreting a statute, Barak's "advice is that ... the judge should aspire to achieve justice." So a regulation that authorizes military censorship of publications that the censor "deems likely to harm state security, public security, or the public peace" was interpreted by Barak's court to mean "would create a near certainty of grave harm to state security, public security, or public peace." It is thus the court that makes Israel's statutory law, using the statutes themselves as first drafts that the court is free to rewrite.
Barak invokes the "separation of powers" as further support for his aggressive conception of the judicial role. What he means by separation of powers is that the executive and legislative branches are to have no degree of control over the judicial branch. What we mean by separation of powers, so far as judicial authority is concerned, is that something called the judicial power of the United States has been consigned to the judicial branch. That doesn't mean the branch is independent of the other branches. If each of the powers (executive, legislative, and judicial) were administered by a branch that was wholly independent and thus could ignore the others, the result would be chaos.
The branches have to be mutually dependent, in order to force cooperation. So "separation of powers" implies "checks and balances," and the judicial branch has to be checked by the other branches, and not just do the checking. And so rather than our judiciary being a self-perpetuating oligarchy, the president nominates and the Senate confirms (or rejects) federal judges, and Congress fixes their salaries, regulates the Supreme Court's appellate jurisdiction, decides whether to create other federal courts, determines the federal judiciary's budget, and can remove judges by means of the impeachment process.
Moreover, the judicial power of the United States can be exercised only in suits brought by persons who have standing to sue in the sense of having a tangible grievance that can be remedied by the court. And because the judicial power is not the only federal power--there are executive and legislative powers of constitutional dignity as well--the judiciary cannot tell the president whom to appoint to his cabinet.
In Barak's conception of the separation of powers, the judicial power is unlimited and the legislature cannot remove judges. (And in Israel, judges participate in the selection of judges.) Outfitted with such abstractions as "democracy," "interpretation," "separation of powers," "objectivity," "reasonableness" (it is "the concept of reasonableness" that Barak would have used to adjudicate the "package deal" for the release of the terrorist), and of course "justice" ("I try to be guided by my North Star, which is justice. I try to make law and justice converge, so that the Justice will do justice"), a judge is a law unto himself.
Barak's jurisprudence may seem to hold no interest for Americans other than as an illustration of the world's diversity. But in fact it has important implications for the controversial issue of whether American judges should cite foreign cases as authority.
I must explain what I mean by "as authority." There is no objection to citing a foreign judicial opinion because it contains an insight that bears on the case at hand, just as one might cite a book or an article. But that is different from treating the foreign decision as a "precedent," in the legal sense of a decision that has weight irrespective of the cogency of its reasoning. Some American judges think that just the fact that a foreign court has decided a case in a certain way is entitled to some weight in deciding a similar American case. So if a foreign supreme court has held that executing juvenile murderers is unconstitutional, its decision, even if not impressively reasoned, is one more twig to place in one of the pans of the scales of justice.
But what we learn from Barak's book is that some foreign legal systems, even the legal system of a democratic nation that is a close ally of the United States, are so alien to our own system that their decisions ought to be given no weight by our courts. American judges distinguish between how they might vote on a statute if they were legislators and whether the statute is unconstitutional; they might think it a bad statute yet uphold its constitutionality. But in a Barak-dominated court, it would be very difficult to tell whether a judgment of unconstitutionality was anything more than the judges' opinion that it was a dumb statute, something they would not have voted for if they were legislators. And such an opinion would have no significance at all for the question of constitutionality.
When Robert Bork attributes "judicial hubris" to Barak, he is using as his benchmark the American system. Many Israelis think Barak hubristic, but whether he is or is not in the Israeli setting is irrelevant to Bork's judgment. All Bork means is that a judge who thinks like Barak is playing outside the boundaries within which American judges operate. Not that there are no hubristic American decisions, of course; but their authors make some effort to tether them to orthodox legal materials, such as the constitutional text.
The tether is long and frayed when, for example, a judge decides that criminalizing abortion, or refusing to grant a marriage license to a homosexual couple, is a deprivation of liberty without due process of law. Such decisions could be thought lawless in the sense that the judge is making a discretionary judgment that owes nothing to an authoritative text and everything to the judge's personal values. So there is a sense in which Barak merely carries to its logical extreme a tendency discernible in our courts. It is a matter of degree, but at some point a difference in degree can rightly be called a difference in kind.
Barak's book is not introspective. He purports to derive his judicial approach from the abstractions that I mentioned, but they cannot be the real source of his jurisprudence, because they are as empty as they are lofty.
In places the book is naive, as when Barak writes that "other branches [of government] seek to attain efficiency; the courts seek to attain legality." Or when, in defending a ruling made during the Gulf war in 1991 requiring the Israeli army to distribute more gas masks to residents of the West Bank, Barak says that "we did not intervene in military considerations, for which the expertise and responsibility lie with the executive. Rather, we intervened in considerations of equality, for which the expertise and responsibility rest with the judiciary."
Yet the book strongly commends the balancing of competing interests as a technique of judicial decision-making, implying that in the gas-mask case the court should have balanced against considerations of equality whatever military reasons the army gave for distributing fewer gas masks on the West Bank than in Israel proper, such as that Iraq was more likely to aim its missiles at Jews than at Arabs. A few pages after the gas masks, Barak writes, inconsistently, that when reviewing a security measure, "the court asks if a reasonable person responsible for security would be prudent to adopt the security measures that were adopted."
The book is, in fact, rather unsophisticated, as if written for a nonprofessional audience. (It is also riddled with minor errors, such as renaming me "Robert Posner.") But it has some good points, such as its discussion of the things besides justice that judges should consider in interpreting a statute, bridging that mysterious gap between law and society, and objective purpose "at the highest level of abstraction" (the level at which the objective purpose is to realize the ideals of democracy). And the chapter on terrorism that I have just been criticizing rightly observes that judicial decisions restricting civil liberties in wartime may serve as precedents for restricting such liberties in peacetime, which to some extent has happened in the United States since September 11, and also that we do not need two systems of balancing security and liberty, one for wartime and one for peacetime--we can use one system, while recognizing, as Barak to his credit does, that security does have more weight in time of war. Nor do I mean to suggest that Barak's judicial oeuvre as a whole is hubristic.
The"Basic Laws" may not be a constitution, but they provide an adequate textual basis, even in American terms, for decisions that Barak has written forbidding discrimination against homosexuals and against Israel's Arab citizens.
And whatever the weaknesses of the book, Barak himself is by all accounts brilliant, as well as austere and high-minded--Israel's Cato. Israel is an immature democracy, poorly governed; its political class is mediocre and corrupt; it floats precariously in a lethally hostile Muslim sea; and it really could use a constitution. Barak stepped into a political and legal vacuum, and with dash and ingenuity orchestrated a series of (in Laurence Tribe's words on the dust jacket) "surprisingly agreeable outcomes."
He was a legal buccaneer, and maybe that was what Israel needed. But there is not a hint of an acknowledgment of this in the book. Barak writes not only without self-doubt, but also without a sense that his jurisprudence may reflect local, as well as personal, conditions. (He survived the Holocaust as a child in Lithuania, and this may help us to understand a position of his that would be thought unacceptably illiberal in the United States: that no member of an anti-democratic party can be permitted to stand for election to the Knesset, since the Nazi Party came to power in Germany democratically.) He pities our Supreme Court justices their timidity.
No wonder he frightens Robert Bork.
Me:
There is an interesting tension in Posner's excellent and balanced review.
At one point he says, as quoted above:
...What he seems to mean, as further suggested by his statement that "whoever enforces a statute enforces the whole legal system," is that a statute should be interpreted so that it is harmonious with the spirit or values of the legal system as a whole, which as a practical matter means with the judge's ideal system, since no real legal system has a unitary spirit or common set of values...
And in that context he criticizes Barak for using empty abstractions--"as empty as they are lofty"--such as reasonableness and justice, as covers for his own judicial lawmaking, which Posner excoriates.
But in listing what is good about the book he reviews, he points to this amongst other things:
....But it has some good points, such as its discussion of the things besides justice that judges should consider in interpreting a statute, bridging that mysterious gap between law and society, and objective purpose "at the highest level of abstraction" (the level at which the objective purpose is to realize the ideals of democracy)....
That tension ties in with Anthony Lewis's review of Justice Breyer's new book. Lewis begins by lauding Barak's philosophical case for judicial review and goes on to note Breyer's argument, against originalism (by the way), that the evolving democratic values underlying the American Constitution's words are to guide judges in their resolution of constitutional controversies.
Sunday, October 31, 2010
Friday, October 29, 2010
Liberal Interventionism and the Case of Iraq: Discredited?
How do you regret saved lives?
Abe Greenwald - 10.28.2010//Contentions
In the Globe and Mail, Margaret Wente writes of her regret at having supported the Iraq War as a liberal interventionist. She now claims she was “deluded” to think there was a sound humanitarian justification for the invasion in 2003. What has prompted this apologia? Details in the Iraq files just released by Wikileaks:
...The abuses at the Abu Ghraib prison were mild compared with the atrocities inflicted by Iraqis on each other. The Shia-controlled Interior Ministry ran secret jails in which inmates, most of them Sunnis, endured the same kinds of torture as those inflicted by Saddam. They were burned with boiling water, had their fingers amputated, and had electroshock applied to their genitals. When U.S. forces discovered the brutalities, they simply filled out incident reports and forwarded them to the local authorities....
No human is immune to hearing about that kind of brutality. Which is why a little context actually strengthens the liberal-interventionist position.
The math is simple, if disturbing: According to estimates from human rights organizations, from 1979 to 2003, Saddam Hussein probably killed 800,000 to 1 million people, many through methods similar to the ones detailed above. This puts his annual average in the neighborhood of 45,000 murders. At that rate, if he were left in power, he would have killed 360,000 since 2003.
As of today, the website Iraqbodycount.org puts the total number of civilian deaths caused by the Iraq War between 98,585 and 107,594.
In what universe is 100,000 dead worse than 360,000?
Moreover, consider how the annual Iraqi body count will likely continue to plummet in the coming years. Add to that, the prospect—shaky though it may be—of a functioning Iraqi democracy. Under Saddam, it would have been 360,000 dead with no chance of ebbing the slaughter and no hope for freedom.
There is no humanitarian justification for regretting Saddam’s ouster, even accounting for the coalition’s mistakes. There are other less quantifiably false arguments against the war. One might make the case that it cost too much to prosecute or turned world opinion against the U.S., for example. But hand-wringing over the net carnage just doesn’t compute. The facts are too easily obtained for Margaret Wente not to understand this. If she regrets her support for the war it can only mean she’s decided that the humanitarian component is not as important as she had once believed.
Me:
Strengthening Greenwald's argument: look who would have succeeded Saddam Hussein but for America.
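A quick back-of-envelope check of Greenwald's arithmetic (a sketch using only the figures he cites, and assuming roughly eight years from the 2003 invasion to the present; note that his ~45,000-per-year figure rounds up somewhat from the strict average of the estimates):

\[
\frac{800{,}000 \text{ to } 1{,}000{,}000 \text{ killed}}{24 \text{ years } (1979\text{--}2003)} \approx 33{,}000 \text{ to } 42{,}000 \text{ per year}, \qquad 45{,}000 \times 8 \approx 360{,}000.
\]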
For the First Time Shelby Steele Disappoints Me: a Silly Screed on Obama that Some Right Wingers Say is Not to be Missed
OCTOBER 28, 2010//WSJ
A Referendum on the Redeemer
Barack Obama put the Democrats in the position of forever redeeming a fallen nation rather than leading a great one.
Whether or not the Republicans win big next week, it is already clear that the "transformative" aspirations of the Obama presidency—the special promise of this first black president to "change" us into a better society—are much less likely to materialize. There will be enough Republican gains to make the "no" in the "party of no" even more formidable, if not definitive.
But apart from this politics of numbers, there is also now a deepening disenchantment with Barack Obama himself. (He has a meager 37% approval rating by the latest Harris poll.) His embarrassed supporters console themselves that their intentions were good; their vote helped make history. But for Mr. Obama himself there is no road back to the charisma and political capital he enjoyed on his inauguration day.
How is it that Barack Obama could step into the presidency with an air of inevitability and then, in less than two years, find himself unwelcome at the campaign rallies of many of his fellow Democrats?
The first answer is well-known: His policymaking has been grandiose, thoughtless and bullying. His health-care bill was ambitious to the point of destructiveness and, finally, so chaotic that today no citizen knows where they stand in relation to it. His financial-reform bill seems little more than a short-sighted scapegoating of Wall Street. In foreign policy he has failed to articulate a role for America in the world. We don't know why we do what we do in foreign affairs. George W. Bush at least made a valiant stab at an American rationale—democratization—but with Mr. Obama there is nothing.
All this would be enough to explain the disillusionment with this president—and with the Democratic Party that he leads. But there is also a deeper disjunction. There is an "otherness" about Mr. Obama, the sense that he is somehow not truly American. "Birthers" doubt that he was born on American soil. Others believe that he is secretly a Muslim, or in quiet simpatico with his old friends, Rev. Jeremiah Wright and Bill Ayers, now icons of American radicalism.
But Barack Obama is not an "other" so much as he is a child of the 1960s.
His coming of age paralleled exactly the unfolding of a new "counterculture" American identity. And this new American identity—and the post-1960s liberalism it spawned—is grounded in a remarkable irony: bad faith in America as virtue itself, bad faith in the classic American identity of constitutional freedom and capitalism as the way to a better America. So Mr. Obama is very definitely an American, and he has a broad American constituency. He is simply the first president we have seen grounded in this counterculture American identity. When he bows to foreign leaders, he is not displaying "otherness" but the counterculture Americanism of honorable self-effacement in which America acknowledges its own capacity for evil as prelude to engagement.
Bad faith in America became virtuous in the '60s when America finally acknowledged so many of its flagrant hypocrisies: the segregation of blacks, the suppression of women, the exploitation of other minorities, the "imperialism" of the Vietnam War, the indifference to the environment, the hypocrisy of puritanical sexual mores and so on. The compounding of all these hypocrisies added up to the crowning idea of the '60s: that America was characterologically evil. Thus the only way back to decency and moral authority was through bad faith in America and its institutions, through the presumption that evil was America's natural default position.
Among today's liberal elite, bad faith in America is a sophistication, a kind of hipness. More importantly, it is the perfect formula for political and governmental power. It rationalizes power in the name of intervening against evil—I will use the government to intervene against the evil tendencies of American life (economic inequality, structural racism and sexism, corporate greed, neglect of the environment and so on), so I need your vote.
"Hope and Change" positioned Mr. Obama as a conduit between an old America worn down by its evil inclinations and a new America redeemed of those inclinations. There was no vision of the future in "Hope and Change." It is an expression of bad faith in America, but its great ingenuity was to turn that bad faith into political motivation, into votes.
But there is a limit to bad faith as power, and Mr. Obama and the Democratic Party may have now reached that limit. The great weakness of bad faith is that it disallows American exceptionalism as a rationale for power. It puts Mr. Obama and the Democrats in the position of forever redeeming a fallen nation, rather than leading a great nation. They bet on America's characterological evil and not on her sense of fairness, generosity or ingenuity.
When bad faith is your framework (Michelle Obama never being proud of her country until it supported her husband), then you become more a national scold than a real leader. You lead out of a feeling that your opposition is really only the latest incarnation of that old characterological evil that you always knew was there. Thus the tea party—despite all the evidence to the contrary—is seen as racist and bigoted.
But isn't the tea party, on some level, a reaction to a president who seems not to fully trust the fundamental decency of the American people? Doesn't the tea party fill a void left open by Mr. Obama's ethos of bad faith? Aren't tea partiers, and their many fellow travelers, simply saying that American exceptionalism isn't racism? And if the mainstream media see tea partiers as bumpkins and racists, isn't this just more bad faith—characterizing people as ignorant or evil so as to dismiss them?
Our great presidents have been stewards, men who broadly identified with the whole of America. Stewardship meant responsibility even for those segments of America where one might be reviled. Surely Mr. Obama would claim such stewardship. But he has functioned more as a redeemer than a steward, a leader who sees a badness in us from which we must be redeemed. Many Americans are afraid of this because a mandate as grandiose as redemption justifies a vast expansion of government. A redeemer can't just tweak and guide a faltering economy; he will need a trillion-dollar stimulus package. He can't take on health care a step at a time; he must do it all at once, finally mandating that every citizen buy in.
Next week's election is, among other things, a referendum on the idea of president-as-redeemer. We have a president so determined to transform and redeem us from what we are that, by his own words, he is willing to risk being a one-term president. People now wonder if Barack Obama can pivot back to the center like Bill Clinton did after his setback in '94. But Mr. Clinton was already a steward, a policy wonk, a man of the center. Mr. Obama has to change archetypes.
Me: A few examples, certainly not exhaustive:
1. Obama is a policy bully, ramrodding his policies through, their unpopularity notwithstanding;
2. He sees Americans not as fellow citizens but as "sociological case studies";
3. He embodies the sixties "counterculture";
4. He espouses "bad faith as hipness";
5. Obama banks on U.S. "characterological evil";
6. He has no coherent foreign policy;
7. (throwaway) Michelle Obama was not proud of America until she became its first lady.
Please!
How tired is all this!
The Disadvantages of an Elite Education
William Deresiewicz
June 1, 2008//American Scholar
It didn’t dawn on me that there might be a few holes in my education until I was about 35. I’d just bought a house, the pipes needed fixing, and the plumber was standing in my kitchen. There he was, a short, beefy guy with a goatee and a Red Sox cap and a thick Boston accent, and I suddenly learned that I didn’t have the slightest idea what to say to someone like him. So alien was his experience to me, so unguessable his values, so mysterious his very language, that I couldn’t succeed in engaging him in a few minutes of small talk before he got down to work. Fourteen years of higher education and a handful of Ivy League degrees, and there I was, stiff and stupid, struck dumb by my own dumbness. “Ivy retardation,” a friend of mine calls this. I could carry on conversations with people from other countries, in other languages, but I couldn’t talk to the man who was standing in my own house.
It’s not surprising that it took me so long to discover the extent of my miseducation, because the last thing an elite education will teach you is its own inadequacy. As two dozen years at Yale and Columbia have shown me, elite colleges relentlessly encourage their students to flatter themselves for being there, and for what being there can do for them. The advantages of an elite education are indeed undeniable. You learn to think, at least in certain ways, and you make the contacts needed to launch yourself into a life rich in all of society’s most cherished rewards. To consider that while some opportunities are being created, others are being cancelled and that while some abilities are being developed, others are being crippled is, within this context, not only outrageous, but inconceivable.
I’m not talking about curricula or the culture wars, the closing or opening of the American mind, political correctness, canon formation, or what have you. I’m talking about the whole system in which these skirmishes play out. Not just the Ivy League and its peer institutions, but also the mechanisms that get you there in the first place: the private and affluent public “feeder” schools, the ever-growing parastructure of tutors and test-prep courses and enrichment programs, the whole admissions frenzy and everything that leads up to and away from it. The message, as always, is the medium. Before, after, and around the elite college classroom, a constellation of values is ceaselessly inculcated. As globalization sharpens economic insecurity, we are increasingly committing ourselves—as students, as parents, as a society—to a vast apparatus of educational advantage. With so many resources devoted to the business of elite academics and so many people scrambling for the limited space at the top of the ladder, it is worth asking what exactly it is you get in the end—what it is we all get, because the elite students of today, as their institutions never tire of reminding them, are the leaders of tomorrow.
The first disadvantage of an elite education, as I learned in my kitchen that day, is that it makes you incapable of talking to people who aren’t like you. Elite schools pride themselves on their diversity, but that diversity is almost entirely a matter of ethnicity and race. With respect to class, these schools are largely—indeed increasingly—homogeneous. Visit any elite campus in our great nation and you can thrill to the heartwarming spectacle of the children of white businesspeople and professionals studying and playing alongside the children of black, Asian, and Latino businesspeople and professionals. At the same time, because these schools tend to cultivate liberal attitudes, they leave their students in the paradoxical position of wanting to advocate on behalf of the working class while being unable to hold a simple conversation with anyone in it. Witness the last two Democratic presidential nominees, Al Gore and John Kerry: one each from Harvard and Yale, both earnest, decent, intelligent men, both utterly incapable of communicating with the larger electorate.
But it isn’t just a matter of class. My education taught me to believe that people who didn’t go to an Ivy League or equivalent school weren’t worth talking to, regardless of their class. I was given the unmistakable message that such people were beneath me. We were “the best and the brightest,” as these places love to say, and everyone else was, well, something else: less good, less bright. I learned to give that little nod of understanding, that slightly sympathetic “Oh,” when people told me they went to a less prestigious college. (If I’d gone to Harvard, I would have learned to say “in Boston” when I was asked where I went to school—the Cambridge version of noblesse oblige.) I never learned that there are smart people who don’t go to elite colleges, often precisely for reasons of class. I never learned that there are smart people who don’t go to college at all.
I also never learned that there are smart people who aren’t “smart.” The existence of multiple forms of intelligence has become a commonplace, but however much elite universities like to sprinkle their incoming classes with a few actors or violinists, they select for and develop one form of intelligence: the analytic. While this is broadly true of all universities, elite schools, precisely because their students (and faculty, and administrators) possess this one form of intelligence to such a high degree, are more apt to ignore the value of others. One naturally prizes what one most possesses and what most makes for one’s advantages. But social intelligence and emotional intelligence and creative ability, to name just three other forms, are not distributed preferentially among the educational elite. The “best” are the brightest only in one narrow sense. One needs to wander away from the educational elite to begin to discover this.
What about people who aren’t bright in any sense?
I have a friend who went to an Ivy League college after graduating from a typically mediocre public high school. One of the values of going to such a school, she once said, is that it teaches you to relate to stupid people. Some people are smart in the elite-college way, some are smart in other ways, and some aren’t smart at all. It should be embarrassing not to know how to talk to any of them, if only because talking to people is the only real way of knowing them. Elite institutions are supposed to provide a humanistic education, but the first principle of humanism is Terence’s: “nothing human is alien to me.” The first disadvantage of an elite education is how very much of the human it alienates you from.
The second disadvantage, implicit in what I’ve been saying, is that an elite education inculcates a false sense of self-worth. Getting to an elite college, being at an elite college, and going on from an elite college—all involve numerical rankings: SAT, GPA, GRE. You learn to think of yourself in terms of those numbers. They come to signify not only your fate, but your identity; not only your identity, but your value. It’s been said that what those tests really measure is your ability to take tests, but even if they measure something real, it is only a small slice of the real. The problem begins when students are encouraged to forget this truth, when academic excellence becomes excellence in some absolute sense, when “better at X” becomes simply “better.”
There is nothing wrong with taking pride in one’s intellect or knowledge. There is something wrong with the smugness and self-congratulation that elite schools connive at from the moment the fat envelopes come in the mail. From orientation to graduation, the message is implicit in every tone of voice and tilt of the head, every old-school tradition, every article in the student paper, every speech from the dean. The message is: You have arrived. Welcome to the club. And the corollary is equally clear: You deserve everything your presence here is going to enable you to get. When people say that students at elite schools have a strong sense of entitlement, they mean that those students think they deserve more than other people because their SAT scores are higher.
At Yale, and no doubt at other places, the message is reinforced in embarrassingly literal terms. The physical form of the university—its quads and residential colleges, with their Gothic stone façades and wrought-iron portals—is constituted by the locked gate set into the encircling wall. Everyone carries around an ID card that determines which gates they can enter. The gate, in other words, is a kind of governing metaphor—because the social form of the university, as is true of every elite school, is constituted the same way. Elite colleges are walled domains guarded by locked gates, with admission granted only to the elect.
The aptitude with which students absorb this lesson is demonstrated by the avidity with which they erect still more gates within those gates, special realms of ever-greater exclusivity—at Yale, the famous secret societies, or as they should probably be called, the open-secret societies, since true secrecy would defeat their purpose. There’s no point in excluding people unless they know they’ve been excluded.
One of the great errors of an elite education, then, is that it teaches you to think that measures of intelligence and academic achievement are measures of value in some moral or metaphysical sense. But they’re not. Graduates of elite schools are not more valuable than stupid people, or talentless people, or even lazy people. Their pain does not hurt more. Their souls do not weigh more. If I were religious, I would say, God does not love them more. The political implications should be clear. As John Ruskin told an older elite, grabbing what you can get isn’t any less wicked when you grab it with the power of your brains than with the power of your fists. “Work must always be,” Ruskin says, “and captains of work must always be….[But] there is a wide difference between being captains…of work, and taking the profits of it.”
The political implications don’t stop there. An elite education not only ushers you into the upper classes; it trains you for the life you will lead once you get there. I didn’t understand this until I began comparing my experience, and even more, my students’ experience, with the experience of a friend of mine who went to Cleveland State.
There are due dates and attendance requirements at places like Yale, but no one takes them very seriously. Extensions are available for the asking; threats to deduct credit for missed classes are rarely, if ever, carried out. In other words, students at places like Yale get an endless string of second chances. Not so at places like Cleveland State. My friend once got a D in a class in which she’d been running an A because she was coming off a waitressing shift and had to hand in her term paper an hour late.
That may be an extreme example, but it is unthinkable at an elite school. Just as unthinkably, she had no one to appeal to. Students at places like Cleveland State, unlike those at places like Yale, don’t have a platoon of advisers and tutors and deans to write out excuses for late work, give them extra help when they need it, pick them up when they fall down. They get their education wholesale, from an indifferent bureaucracy; it’s not handed to them in individually wrapped packages by smiling clerks. There are few, if any, opportunities for the kind of contacts I saw my students get routinely—classes with visiting power brokers, dinners with foreign dignitaries.
There are also few, if any, of the kind of special funds that, at places like Yale, are available in profusion: travel stipends, research fellowships, performance grants. Each year, my department at Yale awards dozens of cash prizes for everything from freshman essays to senior projects. This year, those awards came to more than $90,000—in just one department.
Students at places like Cleveland State also don't get A-'s just for doing the work. There's been a lot of handwringing lately over grade inflation, and it is a scandal, but the most scandalous thing about it is how uneven it's been. Forty years ago, the average GPA at both public and private universities was about 2.6, still close to the traditional B-/C+ curve. Since then, it's gone up everywhere, but not by anything like the same amount. The average GPA at public universities is now about 3.0, a B; at private universities it's about 3.3, just short of a B+. And at most Ivy League schools, it's closer to 3.4.
But there are always students who don’t do the work, or who are taking a class far outside their field (for fun or to fulfill a requirement), or who aren’t up to standard to begin with (athletes, legacies). At a school like Yale, students who come to class and work hard expect nothing less than an A-. And most of the time, they get it.
In short, the way students are treated in college trains them for the social position they will occupy once they get out. At schools like Cleveland State, they’re being trained for positions somewhere in the middle of the class system, in the depths of one bureaucracy or another. They’re being conditioned for lives with few second chances, no extensions, little support, narrow opportunity—lives of subordination, supervision, and control, lives of deadlines, not guidelines.
At places like Yale, of course, it’s the reverse.
The elite like to think of themselves as belonging to a meritocracy, but that’s true only up to a point. Getting through the gate is very difficult, but once you’re in, there’s almost nothing you can do to get kicked out. Not the most abject academic failure, not the most heinous act of plagiarism, not even threatening a fellow student with bodily harm—I’ve heard of all three—will get you expelled. The feeling is that, by gosh, it just wouldn’t be fair—in other words, the self-protectiveness of the old-boy network, even if it now includes girls. Elite schools nurture excellence, but they also nurture what a former Yale graduate student I know calls “entitled mediocrity.” A is the mark of excellence; A- is the mark of entitled mediocrity. It’s another one of those metaphors, not so much a grade as a promise. It means, don’t worry, we’ll take care of you. You may not be all that good, but you’re good enough.
Here, too, college reflects the way things work in the adult world (unless it’s the other way around). For the elite, there’s always another extension—a bailout, a pardon, a stint in rehab—always plenty of contacts and special stipends—the country club, the conference, the year-end bonus, the dividend. If Al Gore and John Kerry represent one of the characteristic products of an elite education, George W. Bush represents another. It’s no coincidence that our current president, the apotheosis of entitled mediocrity, went to Yale. Entitled mediocrity is indeed the operating principle of his administration, but as Enron and WorldCom and the other scandals of the dot-com meltdown demonstrated, it’s also the operating principle of corporate America.
The fat salaries paid to underperforming CEOs are an adult version of the A-. Anyone who remembers the injured sanctimony with which Kenneth Lay greeted the notion that he should be held accountable for his actions will understand the mentality in question—the belief that once you’re in the club, you’ve got a God-given right to stay in the club. But you don’t need to remember Ken Lay, because the whole dynamic played out again last year in the case of Scooter Libby, another Yale man.
If one of the disadvantages of an elite education is the temptation it offers to mediocrity, another is the temptation it offers to security. When parents explain why they work so hard to give their children the best possible education, they invariably say it is because of the opportunities it opens up. But what of the opportunities it shuts down? An elite education gives you the chance to be rich—which is, after all, what we’re talking about—but it takes away the chance not to be. Yet the opportunity not to be rich is one of the greatest opportunities with which young Americans have been blessed. We live in a society that is itself so wealthy that it can afford to provide a decent living to whole classes of people who in other countries exist (or in earlier times existed) on the brink of poverty or, at least, of indignity.
You can live comfortably in the United States as a schoolteacher, or a community organizer, or a civil rights lawyer, or an artist—that is, by any reasonable definition of comfort. You have to live in an ordinary house instead of an apartment in Manhattan or a mansion in L.A.; you have to drive a Honda instead of a BMW or a Hummer; you have to vacation in Florida instead of Barbados or Paris, but what are such losses when set against the opportunity to do work you believe in, work you’re suited for, work you love, every day of your life?
Yet it is precisely that opportunity that an elite education takes away. How can I be a schoolteacher—wouldn’t that be a waste of my expensive education? Wouldn’t I be squandering the opportunities my parents worked so hard to provide? What will my friends think? How will I face my classmates at our 20th reunion, when they’re all rich lawyers or important people in New York? And the question that lies behind all these: Isn’t it beneath me? So a whole universe of possibility closes, and you miss your true calling.
This is not to say that students from elite colleges never pursue a riskier or less lucrative course after graduation, but even when they do, they tend to give up more quickly than others. (Let’s not even talk about the possibility of kids from privileged backgrounds not going to college at all, or delaying matriculation for several years, because however appropriate such choices might sometimes be, our rigid educational mentality places them outside the universe of possibility—the reason so many kids go sleepwalking off to college with no idea what they’re doing there.) This doesn’t seem to make sense, especially since students from elite schools tend to graduate with less debt and are more likely to be able to float by on family money for a while. I wasn’t aware of the phenomenon myself until I heard about it from a couple of graduate students in my department, one from Yale, one from Harvard.
They were talking about trying to write poetry, how friends of theirs from college called it quits within a year or two while people they know from less prestigious schools are still at it. Why should this be? Because students from elite schools expect success, and expect it now. They have, by definition, never experienced anything else, and their sense of self has been built around their ability to succeed. The idea of not being successful terrifies them, disorients them, defeats them. They’ve been driven their whole lives by a fear of failure—often, in the first instance, by their parents’ fear of failure. The first time I blew a test, I walked out of the room feeling like I no longer knew who I was. The second time, it was easier; I had started to learn that failure isn’t the end of the world.
But if you’re afraid to fail, you’re afraid to take risks, which begins to explain the final and most damning disadvantage of an elite education: that it is profoundly anti-intellectual. This will seem counterintuitive. Aren’t kids at elite schools the smartest ones around, at least in the narrow academic sense? Don’t they work harder than anyone else—indeed, harder than any previous generation? They are. They do. But being an intellectual is not the same as being smart. Being an intellectual means more than doing your homework.
If so few kids come to college understanding this, it is no wonder. They are products of a system that rarely asked them to think about something bigger than the next assignment. The system forgot to teach them, along the way to the prestige admissions and the lucrative jobs, that the most important achievements can’t be measured by a letter or a number or a name. It forgot that the true purpose of education is to make minds, not careers.
Being an intellectual means, first of all, being passionate about ideas—and not just for the duration of a semester, for the sake of pleasing the teacher, or for getting a good grade. A friend who teaches at the University of Connecticut once complained to me that his students don’t think for themselves. Well, I said, Yale students think for themselves, but only because they know we want them to. I’ve had many wonderful students at Yale and Columbia, bright, thoughtful, creative kids whom it’s been a pleasure to talk with and learn from. But most of them have seemed content to color within the lines that their education had marked out for them. Only a small minority have seen their education as part of a larger intellectual journey, have approached the work of the mind with a pilgrim soul. These few have tended to feel like freaks, not least because they get so little support from the university itself. Places like Yale, as one of them put it to me, are not conducive to searchers.
Places like Yale are simply not set up to help students ask the big questions. I don’t think there ever was a golden age of intellectualism in the American university, but in the 19th century students might at least have had a chance to hear such questions raised in chapel or in the literary societies and debating clubs that flourished on campus. Throughout much of the 20th century, with the growth of the humanistic ideal in American colleges, students might have encountered the big questions in the classrooms of professors possessed of a strong sense of pedagogic mission. Teachers like that still exist in this country, but the increasingly dire exigencies of academic professionalization have made them all but extinct at elite universities. Professors at top research institutions are valued exclusively for the quality of their scholarly work; time spent on teaching is time lost. If students want a conversion experience, they’re better off at a liberal arts college.
When elite universities boast that they teach their students how to think, they mean that they teach them the analytic and rhetorical skills necessary for success in law or medicine or science or business. But a humanistic education is supposed to mean something more than that, as universities still dimly feel. So when students get to college, they hear a couple of speeches telling them to ask the big questions, and when they graduate, they hear a couple more speeches telling them to ask the big questions. And in between, they spend four years taking courses that train them to ask the little questions—specialized courses, taught by specialized professors, aimed at specialized students.
Although the notion of breadth is implicit in the very idea of a liberal arts education, the admissions process increasingly selects for kids who have already begun to think of themselves in specialized terms—the junior journalist, the budding astronomer, the language prodigy. We are slouching, even at elite schools, toward a glorified form of vocational training.
Indeed, that seems to be exactly what those schools want.
There’s a reason elite schools speak of training leaders, not thinkers—holders of power, not its critics. An independent mind is independent of all allegiances, and elite schools, which get a large percentage of their budget from alumni giving, are strongly invested in fostering institutional loyalty. As another friend, a third-generation Yalie, says, the purpose of Yale College is to manufacture Yale alumni. Of course, for the system to work, those alumni need money. At Yale, the long-term drift of students away from majors in the humanities and basic sciences toward more practical ones like computer science and economics has been abetted by administrative indifference.
The college career office has little to say to students not interested in law, medicine, or business, and elite universities are not going to do anything to discourage the large percentage of their graduates who take their degrees to Wall Street. In fact, they’re showing them the way. The liberal arts university is becoming the corporate university, its center of gravity shifting to technical fields where scholarly expertise can be parlayed into lucrative business opportunities.
It’s no wonder that the few students who are passionate about ideas find themselves feeling isolated and confused. I was talking with one of them last year about his interest in the German Romantic idea of bildung, the upbuilding of the soul. But, he said—he was a senior at the time—it’s hard to build your soul when everyone around you is trying to sell theirs.
Yet there is a dimension of the intellectual life that lies above the passion for ideas, though so thoroughly has our culture been sanitized of it that it is hardly surprising if it was beyond the reach of even my most alert students. Since the idea of the intellectual emerged in the 18th century, it has had, at its core, a commitment to social transformation. Being an intellectual means thinking your way toward a vision of the good society and then trying to realize that vision by speaking truth to power. It means going into spiritual exile. It means foreswearing your allegiance, in lonely freedom, to God, to country, and to Yale. It takes more than just intellect; it takes imagination and courage. “I am not afraid to make a mistake,” Stephen Dedalus says, “even a great mistake, a lifelong mistake, and perhaps as long as eternity, too.”
Being an intellectual begins with thinking your way outside of your assumptions and the system that enforces them. But students who get into elite schools are precisely the ones who have best learned to work within the system, so it’s almost impossible for them to see outside it, to see that it’s even there. Long before they got to college, they turned themselves into world-class hoop-jumpers and teacher-pleasers, getting A’s in every class no matter how boring they found the teacher or how pointless the subject, racking up eight or 10 extracurricular activities no matter what else they wanted to do with their time.
Paradoxically, the situation may be better at second-tier schools and, in particular, again, at liberal arts colleges than at the most prestigious universities. Some students end up at second-tier schools because they’re exactly like students at Harvard or Yale, only less gifted or driven. But others end up there because they have a more independent spirit. They didn’t get straight A’s because they couldn’t be bothered to give everything in every class. They concentrated on the ones that meant the most to them or on a single strong extracurricular passion or on projects that had nothing to do with school or even with looking good on a college application. Maybe they just sat in their room, reading a lot and writing in their journal. These are the kinds of kids who are likely, once they get to college, to be more interested in the human spirit than in school spirit, and to think about leaving college bearing questions, not resumés.
I’ve been struck, during my time at Yale, by how similar everyone looks. You hardly see any hippies or punks or art-school types, and at a college that was known in the ’80s as the Gay Ivy, few out lesbians and no gender queers. The geeks don’t look all that geeky; the fashionable kids go in for understated elegance. Thirty-two flavors, all of them vanilla. The most elite schools have become places of a narrow and suffocating normalcy. Everyone feels pressure to maintain the kind of appearance—and affect—that go with achievement. (Dress for success, medicate for success.)
I know from long experience as an adviser that not every Yale student is appropriate and well-adjusted, which is exactly why it worries me that so many of them act that way. The tyranny of the normal must be very heavy in their lives. One consequence is that those who can’t get with the program (and they tend to be students from poorer backgrounds) often polarize in the opposite direction, flying off into extremes of disaffection and self-destruction. But another consequence has to do with the large majority who can get with the program.
I taught a class several years ago on the literature of friendship. One day we were discussing Virginia Woolf’s novel The Waves, which follows a group of friends from childhood to middle age. In high school, one of them falls in love with another boy. He thinks, “To whom can I expose the urgency of my own passion?…There is nobody—here among these grey arches, and moaning pigeons, and cheerful games and tradition and emulation, all so skilfully organised to prevent feeling alone.” A pretty good description of an elite college campus, including the part about never being allowed to feel alone.
What did my students think of this, I wanted to know? What does it mean to go to school at a place where you’re never alone?
Well, one of them said, I do feel uncomfortable sitting in my room by myself. Even when I have to write a paper, I do it at a friend’s. That same day, as it happened, another student gave a presentation on Emerson’s essay on friendship. Emerson says, he reported, that one of the purposes of friendship is to equip you for solitude. As I was asking my students what they thought that meant, one of them interrupted to say, wait a second, why do you need solitude in the first place? What can you do by yourself that you can’t do with a friend?
So there they were: one young person who had lost the capacity for solitude and another who couldn’t see the point of it. There’s been much talk of late about the loss of privacy, but equally calamitous is its corollary, the loss of solitude. It used to be that you couldn’t always get together with your friends even when you wanted to. Now that students are in constant electronic contact, they never have trouble finding each other. But it’s not as if their compulsive sociability is enabling them to develop deep friendships. “To whom can I expose the urgency of my own passion?”: my student was in her friend’s room writing a paper, not having a heart-to-heart. She probably didn’t have the time; indeed, other students told me they found their peers too busy for intimacy.
What happens when busyness and sociability leave no room for solitude? The ability to engage in introspection, I put it to my students that day, is the essential precondition for living an intellectual life, and the essential precondition for introspection is solitude. They took this in for a second, and then one of them said, with a dawning sense of self-awareness, “So are you saying that we’re all just, like, really excellent sheep?” Well, I don’t know. But I do know that the life of the mind is lived one mind at a time: one solitary, skeptical, resistant mind at a time. The best place to cultivate it is not within an educational system whose real purpose is to reproduce the class system.
The world that produced John Kerry and George Bush is indeed giving us our next generation of leaders. The kid who’s loading up on AP courses junior year or editing three campus publications while double-majoring, the kid whom everyone wants at their college or law school but no one wants in their classroom, the kid who doesn’t have a minute to breathe, let alone think, will soon be running a corporation or an institution or a government. She will have many achievements but little experience, great success but no vision. The disadvantage of an elite education is that it’s given us the elite we have, and the elite we’re going to have.
Me: two words: sour grapes.
William Deresiewicz//The Disadvantages of an Elite Education//June 1, 2008//American Scholar
It didn’t dawn on me that there might be a few holes in my education until I was about 35. I’d just bought a house, the pipes needed fixing, and the plumber was standing in my kitchen. There he was, a short, beefy guy with a goatee and a Red Sox cap and a thick Boston accent, and I suddenly learned that I didn’t have the slightest idea what to say to someone like him. So alien was his experience to me, so unguessable his values, so mysterious his very language, that I couldn’t succeed in engaging him in a few minutes of small talk before he got down to work. Fourteen years of higher education and a handful of Ivy League degrees, and there I was, stiff and stupid, struck dumb by my own dumbness. “Ivy retardation,” a friend of mine calls this. I could carry on conversations with people from other countries, in other languages, but I couldn’t talk to the man who was standing in my own house.
It’s not surprising that it took me so long to discover the extent of my miseducation, because the last thing an elite education will teach you is its own inadequacy. As two dozen years at Yale and Columbia have shown me, elite colleges relentlessly encourage their students to flatter themselves for being there, and for what being there can do for them. The advantages of an elite education are indeed undeniable. You learn to think, at least in certain ways, and you make the contacts needed to launch yourself into a life rich in all of society’s most cherished rewards. To consider that while some opportunities are being created, others are being cancelled and that while some abilities are being developed, others are being crippled is, within this context, not only outrageous, but inconceivable.
I’m not talking about curricula or the culture wars, the closing or opening of the American mind, political correctness, canon formation, or what have you. I’m talking about the whole system in which these skirmishes play out. Not just the Ivy League and its peer institutions, but also the mechanisms that get you there in the first place: the private and affluent public “feeder” schools, the ever-growing parastructure of tutors and test-prep courses and enrichment programs, the whole admissions frenzy and everything that leads up to and away from it. The message, as always, is the medium. Before, after, and around the elite college classroom, a constellation of values is ceaselessly inculcated. As globalization sharpens economic insecurity, we are increasingly committing ourselves—as students, as parents, as a society—to a vast apparatus of educational advantage. With so many resources devoted to the business of elite academics and so many people scrambling for the limited space at the top of the ladder, it is worth asking what exactly it is you get in the end—what it is we all get, because the elite students of today, as their institutions never tire of reminding them, are the leaders of tomorrow.
The first disadvantage of an elite education, as I learned in my kitchen that day, is that it makes you incapable of talking to people who aren’t like you. Elite schools pride themselves on their diversity, but that diversity is almost entirely a matter of ethnicity and race. With respect to class, these schools are largely—indeed increasingly—homogeneous. Visit any elite campus in our great nation and you can thrill to the heartwarming spectacle of the children of white businesspeople and professionals studying and playing alongside the children of black, Asian, and Latino businesspeople and professionals. At the same time, because these schools tend to cultivate liberal attitudes, they leave their students in the paradoxical position of wanting to advocate on behalf of the working class while being unable to hold a simple conversation with anyone in it. Witness the last two Democratic presidential nominees, Al Gore and John Kerry: one each from Harvard and Yale, both earnest, decent, intelligent men, both utterly incapable of communicating with the larger electorate.
But it isn’t just a matter of class. My education taught me to believe that people who didn’t go to an Ivy League or equivalent school weren’t worth talking to, regardless of their class. I was given the unmistakable message that such people were beneath me. We were “the best and the brightest,” as these places love to say, and everyone else was, well, something else: less good, less bright. I learned to give that little nod of understanding, that slightly sympathetic “Oh,” when people told me they went to a less prestigious college. (If I’d gone to Harvard, I would have learned to say “in Boston” when I was asked where I went to school—the Cambridge version of noblesse oblige.) I never learned that there are smart people who don’t go to elite colleges, often precisely for reasons of class. I never learned that there are smart people who don’t go to college at all.
I also never learned that there are smart people who aren’t “smart.” The existence of multiple forms of intelligence has become a commonplace, but however much elite universities like to sprinkle their incoming classes with a few actors or violinists, they select for and develop one form of intelligence: the analytic. While this is broadly true of all universities, elite schools, precisely because their students (and faculty, and administrators) possess this one form of intelligence to such a high degree, are more apt to ignore the value of others. One naturally prizes what one most possesses and what most makes for one’s advantages. But social intelligence and emotional intelligence and creative ability, to name just three other forms, are not distributed preferentially among the educational elite. The “best” are the brightest only in one narrow sense. One needs to wander away from the educational elite to begin to discover this.
What about people who aren’t bright in any sense?
I have a friend who went to an Ivy League college after graduating from a typically mediocre public high school. One of the values of going to such a school, she once said, is that it teaches you to relate to stupid people. Some people are smart in the elite-college way, some are smart in other ways, and some aren’t smart at all. It should be embarrassing not to know how to talk to any of them, if only because talking to people is the only real way of knowing them. Elite institutions are supposed to provide a humanistic education, but the first principle of humanism is Terence’s: “nothing human is alien to me.” The first disadvantage of an elite education is how very much of the human it alienates you from.
The second disadvantage, implicit in what I’ve been saying, is that an elite education inculcates a false sense of self-worth. Getting to an elite college, being at an elite college, and going on from an elite college—all involve numerical rankings: SAT, GPA, GRE. You learn to think of yourself in terms of those numbers. They come to signify not only your fate, but your identity; not only your identity, but your value. It’s been said that what those tests really measure is your ability to take tests, but even if they measure something real, it is only a small slice of the real. The problem begins when students are encouraged to forget this truth, when academic excellence becomes excellence in some absolute sense, when “better at X” becomes simply “better.”
There is nothing wrong with taking pride in one’s intellect or knowledge. There is something wrong with the smugness and self-congratulation that elite schools connive at from the moment the fat envelopes come in the mail. From orientation to graduation, the message is implicit in every tone of voice and tilt of the head, every old-school tradition, every article in the student paper, every speech from the dean. The message is: You have arrived. Welcome to the club. And the corollary is equally clear: You deserve everything your presence here is going to enable you to get. When people say that students at elite schools have a strong sense of entitlement, they mean that those students think they deserve more than other people because their SAT scores are higher.
At Yale, and no doubt at other places, the message is reinforced in embarrassingly literal terms. The physical form of the university—its quads and residential colleges, with their Gothic stone façades and wrought-iron portals—is constituted by the locked gate set into the encircling wall. Everyone carries around an ID card that determines which gates they can enter. The gate, in other words, is a kind of governing metaphor—because the social form of the university, as is true of every elite school, is constituted the same way. Elite colleges are walled domains guarded by locked gates, with admission granted only to the elect.
The aptitude with which students absorb this lesson is demonstrated by the avidity with which they erect still more gates within those gates, special realms of ever-greater exclusivity—at Yale, the famous secret societies, or as they should probably be called, the open-secret societies, since true secrecy would defeat their purpose. There’s no point in excluding people unless they know they’ve been excluded.
One of the great errors of an elite education, then, is that it teaches you to think that measures of intelligence and academic achievement are measures of value in some moral or metaphysical sense. But they’re not. Graduates of elite schools are not more valuable than stupid people, or talentless people, or even lazy people. Their pain does not hurt more. Their souls do not weigh more. If I were religious, I would say, God does not love them more. The political implications should be clear. As John Ruskin told an older elite, grabbing what you can get isn’t any less wicked when you grab it with the power of your brains than with the power of your fists. “Work must always be,” Ruskin says, “and captains of work must always be….[But] there is a wide difference between being captains…of work, and taking the profits of it.”
The political implications don’t stop there. An elite education not only ushers you into the upper classes; it trains you for the life you will lead once you get there. I didn’t understand this until I began comparing my experience, and even more, my students’ experience, with the experience of a friend of mine who went to Cleveland State.
There are due dates and attendance requirements at places like Yale, but no one takes them very seriously. Extensions are available for the asking; threats to deduct credit for missed classes are rarely, if ever, carried out. In other words, students at places like Yale get an endless string of second chances. Not so at places like Cleveland State. My friend once got a D in a class in which she’d been running an A because she was coming off a waitressing shift and had to hand in her term paper an hour late.
That may be an extreme example, but it is unthinkable at an elite school. Just as unthinkably, she had no one to appeal to. Students at places like Cleveland State, unlike those at places like Yale, don’t have a platoon of advisers and tutors and deans to write out excuses for late work, give them extra help when they need it, pick them up when they fall down. They get their education wholesale, from an indifferent bureaucracy; it’s not handed to them in individually wrapped packages by smiling clerks. There are few, if any, opportunities for the kind of contacts I saw my students get routinely—classes with visiting power brokers, dinners with foreign dignitaries.
There are also few, if any, of the kind of special funds that, at places like Yale, are available in profusion: travel stipends, research fellowships, performance grants. Each year, my department at Yale awards dozens of cash prizes for everything from freshman essays to senior projects. This year, those awards came to more than $90,000—in just one department.
Students at places like Cleveland State also don’t get A-’s just for doing the work. There’s been a lot of handwringing lately over grade inflation, and it is a scandal, but the most scandalous thing about it is how uneven it’s been. Forty years ago, the average GPA at both public and private universities was about 2.6, still close to the traditional B-/C+ curve. Since then, it’s gone up everywhere, but not by anything like the same amount. The average GPA at public universities is now about 3.0, a B; at private universities it’s about 3.3, just short of a B+. And at most Ivy League schools, it’s closer to 3.4.
But there are always students who don’t do the work, or who are taking a class far outside their field (for fun or to fulfill a requirement), or who aren’t up to standard to begin with (athletes, legacies). At a school like Yale, students who come to class and work hard expect nothing less than an A-. And most of the time, they get it.
In short, the way students are treated in college trains them for the social position they will occupy once they get out. At schools like Cleveland State, they’re being trained for positions somewhere in the middle of the class system, in the depths of one bureaucracy or another. They’re being conditioned for lives with few second chances, no extensions, little support, narrow opportunity—lives of subordination, supervision, and control, lives of deadlines, not guidelines.
At places like Yale, of course, it’s the reverse.
The elite like to think of themselves as belonging to a meritocracy, but that’s true only up to a point. Getting through the gate is very difficult, but once you’re in, there’s almost nothing you can do to get kicked out. Not the most abject academic failure, not the most heinous act of plagiarism, not even threatening a fellow student with bodily harm—I’ve heard of all three—will get you expelled. The feeling is that, by gosh, it just wouldn’t be fair—in other words, the self-protectiveness of the old-boy network, even if it now includes girls. Elite schools nurture excellence, but they also nurture what a former Yale graduate student I know calls “entitled mediocrity.” A is the mark of excellence; A- is the mark of entitled mediocrity. It’s another one of those metaphors, not so much a grade as a promise. It means, don’t worry, we’ll take care of you. You may not be all that good, but you’re good enough.
Here, too, college reflects the way things work in the adult world (unless it’s the other way around). For the elite, there’s always another extension—a bailout, a pardon, a stint in rehab—always plenty of contacts and special stipends—the country club, the conference, the year-end bonus, the dividend. If Al Gore and John Kerry represent one of the characteristic products of an elite education, George W. Bush represents another. It’s no coincidence that our current president, the apotheosis of entitled mediocrity, went to Yale. Entitled mediocrity is indeed the operating principle of his administration, but as Enron and WorldCom and the other scandals of the dot-com meltdown demonstrated, it’s also the operating principle of corporate America.
The fat salaries paid to underperforming CEOs are an adult version of the A-. Anyone who remembers the injured sanctimony with which Kenneth Lay greeted the notion that he should be held accountable for his actions will understand the mentality in question—the belief that once you’re in the club, you’ve got a God-given right to stay in the club. But you don’t need to remember Ken Lay, because the whole dynamic played out again last year in the case of Scooter Libby, another Yale man.
If one of the disadvantages of an elite education is the temptation it offers to mediocrity, another is the temptation it offers to security. When parents explain why they work so hard to give their children the best possible education, they invariably say it is because of the opportunities it opens up. But what of the opportunities it shuts down? An elite education gives you the chance to be rich—which is, after all, what we’re talking about—but it takes away the chance not to be. Yet the opportunity not to be rich is one of the greatest opportunities with which young Americans have been blessed. We live in a society that is itself so wealthy that it can afford to provide a decent living to whole classes of people who in other countries exist (or in earlier times existed) on the brink of poverty or, at least, of indignity.
You can live comfortably in the United States as a schoolteacher, or a community organizer, or a civil rights lawyer, or an artist—that is, by any reasonable definition of comfort. You have to live in an ordinary house instead of an apartment in Manhattan or a mansion in L.A.; you have to drive a Honda instead of a BMW or a Hummer; you have to vacation in Florida instead of Barbados or Paris, but what are such losses when set against the opportunity to do work you believe in, work you’re suited for, work you love, every day of your life?
Yet it is precisely that opportunity that an elite education takes away. How can I be a schoolteacher—wouldn’t that be a waste of my expensive education? Wouldn’t I be squandering the opportunities my parents worked so hard to provide? What will my friends think? How will I face my classmates at our 20th reunion, when they’re all rich lawyers or important people in New York? And the question that lies behind all these: Isn’t it beneath me? So a whole universe of possibility closes, and you miss your true calling.
This is not to say that students from elite colleges never pursue a riskier or less lucrative course after graduation, but even when they do, they tend to give up more quickly than others. (Let’s not even talk about the possibility of kids from privileged backgrounds not going to college at all, or delaying matriculation for several years, because however appropriate such choices might sometimes be, our rigid educational mentality places them outside the universe of possibility—the reason so many kids go sleepwalking off to college with no idea what they’re doing there.) This doesn’t seem to make sense, especially since students from elite schools tend to graduate with less debt and are more likely to be able to float by on family money for a while. I wasn’t aware of the phenomenon myself until I heard about it from a couple of graduate students in my department, one from Yale, one from Harvard.
They were talking about trying to write poetry, how friends of theirs from college called it quits within a year or two while people they know from less prestigious schools are still at it. Why should this be? Because students from elite schools expect success, and expect it now. They have, by definition, never experienced anything else, and their sense of self has been built around their ability to succeed. The idea of not being successful terrifies them, disorients them, defeats them. They’ve been driven their whole lives by a fear of failure—often, in the first instance, by their parents’ fear of failure. The first time I blew a test, I walked out of the room feeling like I no longer knew who I was. The second time, it was easier; I had started to learn that failure isn’t the end of the world.
But if you’re afraid to fail, you’re afraid to take risks, which begins to explain the final and most damning disadvantage of an elite education: that it is profoundly anti-intellectual. This will seem counterintuitive. Aren’t kids at elite schools the smartest ones around, at least in the narrow academic sense? Don’t they work harder than anyone else—indeed, harder than any previous generation? They are. They do. But being an intellectual is not the same as being smart. Being an intellectual means more than doing your homework.
If so few kids come to college understanding this, it is no wonder. They are products of a system that rarely asked them to think about something bigger than the next assignment. The system forgot to teach them, along the way to the prestige admissions and the lucrative jobs, that the most important achievements can’t be measured by a letter or a number or a name. It forgot that the true purpose of education is to make minds, not careers.
Being an intellectual means, first of all, being passionate about ideas—and not just for the duration of a semester, for the sake of pleasing the teacher, or for getting a good grade. A friend who teaches at the University of Connecticut once complained to me that his students don’t think for themselves. Well, I said, Yale students think for themselves, but only because they know we want them to. I’ve had many wonderful students at Yale and Columbia, bright, thoughtful, creative kids whom it’s been a pleasure to talk with and learn from. But most of them have seemed content to color within the lines that their education had marked out for them. Only a small minority have seen their education as part of a larger intellectual journey, have approached the work of the mind with a pilgrim soul. These few have tended to feel like freaks, not least because they get so little support from the university itself. Places like Yale, as one of them put it to me, are not conducive to searchers.
Places like Yale are simply not set up to help students ask the big questions. I don’t think there ever was a golden age of intellectualism in the American university, but in the 19th century students might at least have had a chance to hear such questions raised in chapel or in the literary societies and debating clubs that flourished on campus. Throughout much of the 20th century, with the growth of the humanistic ideal in American colleges, students might have encountered the big questions in the classrooms of professors possessed of a strong sense of pedagogic mission. Teachers like that still exist in this country, but the increasingly dire exigencies of academic professionalization have made them all but extinct at elite universities. Professors at top research institutions are valued exclusively for the quality of their scholarly work; time spent on teaching is time lost. If students want a conversion experience, they’re better off at a liberal arts college.
When elite universities boast that they teach their students how to think, they mean that they teach them the analytic and rhetorical skills necessary for success in law or medicine or science or business. But a humanistic education is supposed to mean something more than that, as universities still dimly feel. So when students get to college, they hear a couple of speeches telling them to ask the big questions, and when they graduate, they hear a couple more speeches telling them to ask the big questions. And in between, they spend four years taking courses that train them to ask the little questions—specialized courses, taught by specialized professors, aimed at specialized students.
Although the notion of breadth is implicit in the very idea of a liberal arts education, the admissions process increasingly selects for kids who have already begun to think of themselves in specialized terms—the junior journalist, the budding astronomer, the language prodigy. We are slouching, even at elite schools, toward a glorified form of vocational training.
Indeed, that seems to be exactly what those schools want.
There’s a reason elite schools speak of training leaders, not thinkers—holders of power, not its critics. An independent mind is independent of all allegiances, and elite schools, which get a large percentage of their budget from alumni giving, are strongly invested in fostering institutional loyalty. As another friend, a third-generation Yalie, says, the purpose of Yale College is to manufacture Yale alumni. Of course, for the system to work, those alumni need money. At Yale, the long-term drift of students away from majors in the humanities and basic sciences toward more practical ones like computer science and economics has been abetted by administrative indifference.
The college career office has little to say to students not interested in law, medicine, or business, and elite universities are not going to do anything to discourage the large percentage of their graduates who take their degrees to Wall Street. In fact, they’re showing them the way. The liberal arts university is becoming the corporate university, its center of gravity shifting to technical fields where scholarly expertise can be parlayed into lucrative business opportunities.
It’s no wonder that the few students who are passionate about ideas find themselves feeling isolated and confused. I was talking with one of them last year about his interest in the German Romantic idea of Bildung, the upbuilding of the soul. But, he said—he was a senior at the time—it’s hard to build your soul when everyone around you is trying to sell theirs.
Yet there is a dimension of the intellectual life that lies above the passion for ideas, though so thoroughly has our culture been sanitized of it that it is hardly surprising if it was beyond the reach of even my most alert students. Since the idea of the intellectual emerged in the 18th century, it has had, at its core, a commitment to social transformation. Being an intellectual means thinking your way toward a vision of the good society and then trying to realize that vision by speaking truth to power. It means going into spiritual exile. It means forswearing your allegiance, in lonely freedom, to God, to country, and to Yale. It takes more than just intellect; it takes imagination and courage. “I am not afraid to make a mistake,” Stephen Dedalus says, “even a great mistake, a lifelong mistake, and perhaps as long as eternity, too.”
Being an intellectual begins with thinking your way outside of your assumptions and the system that enforces them. But students who get into elite schools are precisely the ones who have best learned to work within the system, so it’s almost impossible for them to see outside it, to see that it’s even there. Long before they got to college, they turned themselves into world-class hoop-jumpers and teacher-pleasers, getting A’s in every class no matter how boring they found the teacher or how pointless the subject, racking up eight or 10 extracurricular activities no matter what else they wanted to do with their time.
Paradoxically, the situation may be better at second-tier schools and, in particular, again, at liberal arts colleges than at the most prestigious universities. Some students end up at second-tier schools because they’re exactly like students at Harvard or Yale, only less gifted or driven. But others end up there because they have a more independent spirit. They didn’t get straight A’s because they couldn’t be bothered to give everything in every class. They concentrated on the ones that meant the most to them or on a single strong extracurricular passion or on projects that had nothing to do with school or even with looking good on a college application. Maybe they just sat in their room, reading a lot and writing in their journal. These are the kinds of kids who are likely, once they get to college, to be more interested in the human spirit than in school spirit, and to think about leaving college bearing questions, not résumés.
I’ve been struck, during my time at Yale, by how similar everyone looks. You hardly see any hippies or punks or art-school types, and at a college that was known in the ’80s as the Gay Ivy, few out lesbians and no gender queers. The geeks don’t look all that geeky; the fashionable kids go in for understated elegance. Thirty-two flavors, all of them vanilla. The most elite schools have become places of a narrow and suffocating normalcy. Everyone feels pressure to maintain the kind of appearance—and affect—that go with achievement. (Dress for success, medicate for success.)
I know from long experience as an adviser that not every Yale student is appropriate and well-adjusted, which is exactly why it worries me that so many of them act that way. The tyranny of the normal must be very heavy in their lives. One consequence is that those who can’t get with the program (and they tend to be students from poorer backgrounds) often polarize in the opposite direction, flying off into extremes of disaffection and self-destruction. But another consequence has to do with the large majority who can get with the program.
I taught a class several years ago on the literature of friendship. One day we were discussing Virginia Woolf’s novel The Waves, which follows a group of friends from childhood to middle age. In high school, one of them falls in love with another boy. He thinks, “To whom can I expose the urgency of my own passion?…There is nobody—here among these grey arches, and moaning pigeons, and cheerful games and tradition and emulation, all so skilfully organised to prevent feeling alone.” A pretty good description of an elite college campus, including the part about never being allowed to feel alone.
What did my students think of this, I wanted to know? What does it mean to go to school at a place where you’re never alone?
Well, one of them said, I do feel uncomfortable sitting in my room by myself. Even when I have to write a paper, I do it at a friend’s. That same day, as it happened, another student gave a presentation on Emerson’s essay on friendship. Emerson says, he reported, that one of the purposes of friendship is to equip you for solitude. As I was asking my students what they thought that meant, one of them interrupted to say, wait a second, why do you need solitude in the first place? What can you do by yourself that you can’t do with a friend?
So there they were: one young person who had lost the capacity for solitude and another who couldn’t see the point of it. There’s been much talk of late about the loss of privacy, but equally calamitous is its corollary, the loss of solitude. It used to be that you couldn’t always get together with your friends even when you wanted to. Now that students are in constant electronic contact, they never have trouble finding each other. But it’s not as if their compulsive sociability is enabling them to develop deep friendships. “To whom can I expose the urgency of my own passion?”: my student was in her friend’s room writing a paper, not having a heart-to-heart. She probably didn’t have the time; indeed, other students told me they found their peers too busy for intimacy.
What happens when busyness and sociability leave no room for solitude? The ability to engage in introspection, I put it to my students that day, is the essential precondition for living an intellectual life, and the essential precondition for introspection is solitude. They took this in for a second, and then one of them said, with a dawning sense of self-awareness, “So are you saying that we’re all just, like, really excellent sheep?” Well, I don’t know. But I do know that the life of the mind is lived one mind at a time: one solitary, skeptical, resistant mind at a time. The best place to cultivate it is not within an educational system whose real purpose is to reproduce the class system.
The world that produced John Kerry and George Bush is indeed giving us our next generation of leaders. The kid who’s loading up on AP courses junior year or editing three campus publications while double-majoring, the kid whom everyone wants at their college or law school but no one wants in their classroom, the kid who doesn’t have a minute to breathe, let alone think, will soon be running a corporation or an institution or a government. She will have many achievements but little experience, great success but no vision. The disadvantage of an elite education is that it’s given us the elite we have, and the elite we’re going to have.
Me: two words: sour grapes.
Wednesday, October 27, 2010
What Stanley Fish Says About the Job of the Arts Professoriate
Professor, Do Your Job
Stanley Fish//Hoover Institution Policy Review no. 150
The classroom is not your political platform.
Pick up the mission statement of almost any college or university, and you will find claims and ambitions that will lead you to think that it is the job of an institution of higher learning to cure every ill the world has ever known: not only illiteracy and cultural ignorance, which are at least in the ballpark, but poverty, war, racism, gender bias, bad character, discrimination, intolerance, environmental pollution, rampant capitalism, American imperialism, and the hegemony of Wal-Mart; and of course the list could be much longer.
Wesleyan University starts well by pledging to “cultivate a campus environment where students think critically, participate in constructive dialogue and engage in meaningful contemplation” (although I’m not sure what meaningful contemplation is); but then we read of the intention to “foster awareness, respect, and appreciation for a diversity of experiences, interests, beliefs and identities.” Awareness is okay; it’s important to know what’s out there. But why should students be taught to “respect” a diversity of interests, beliefs, and identities in advance of assessing them and taking their measure? The missing word here is “evaluate.”
That’s what intellectual work is all about, the evaluation, not the celebration, of interests, beliefs, and identities; after all, interests can be base, beliefs can be wrong, and identities are often irrelevant to an inquiry.
Yale College’s statement also starts well by promising to seek students “of all backgrounds” and “to educate them through mental discipline,” but then mental discipline turns out to be instrumental to something even more valuable, the development of students’ “moral, civic and creative capacities to the fullest.”
I’m all for moral, civic, and creative capacities, but I’m not sure that there is much I or anyone else could do as a teacher to develop them. Moral capacities (or their absence) have no relationship whatsoever to the reading of novels, or the running of statistical programs, or the execution of laboratory procedures, all of which can produce certain skills, but not moral states. Civic capacities — which mean, I suppose, the capacities that go along with responsible citizenship — won’t be acquired simply because you have learned about the basic structures of American government or read the Federalist papers (both good things to do).
You could ace all your political science and public policy courses and still drop out and go live in the woods or become the Unabomber. And as for creative capacities, there are courses in creative writing in liberal arts colleges, and colleges of fine arts offer instruction in painting, sculpture, pottery, photography, drafting, and the playing of a variety of musical instruments. But even when such courses are housed in liberal arts venues, they belong more to the world of professional instruction — if you want to make something, here’s how to do it — than to the world of academic interrogation.
I’m not saying that there is no connection at all between the successful practice of ethical, social, and political virtues and the courses of instruction listed in the college catalogue; it’s always possible that something you come across or something a teacher says may strike a chord that sets you on a life path you might not otherwise have chosen. But these are contingent effects, and as contingent effects they cannot be designed and shouldn’t be aimed at. (It’s not a good use of your time to aim at results you have only a random chance of producing.)
So what is it that institutions of higher learning are supposed to do? My answer is simple. College and university teachers can (legitimately) do two things:
1) introduce students to bodies of knowledge and traditions of inquiry that had not previously been part of their experience; and
2) equip those same students with the analytical skills — of argument, statistical modeling, laboratory procedure — that will enable them to move confidently within those traditions and to engage in independent research after a course is over.
What can be designed are courses that introduce students to a demarcated field, reading lists that reflect the current state of disciplinary knowledge, exams or experiments that test the ability of students to extend what they have studied to novel fact situations, and in-class exercises that provoke students to construct and solve problems on their own. The designing of these (and related) structures and devices makes sense in the context of an aim that is specific to the pedagogical task — the aim of passing on knowledge and conferring skills.
Teachers can, by virtue of their training and expertise, present complex materials in ways that make them accessible to novices. Teachers can also put students in possession of the analytical tools employed by up-to-date researchers in the field. But teachers cannot, except for a serendipity that by definition cannot be counted on, fashion moral character, or inculcate respect for others, or produce citizens of a certain temper. Or, rather, they cannot do these things unless they abandon the responsibilities that belong to them by contract in order to take up responsibilities that belong properly to others.
But if they do that, they will be practicing without a license and in all likelihood doing a bad job at a job they shouldn’t be doing at all. When that happens — and unfortunately it does happen — everyone loses. The students lose because they’re not getting what they paid for (it will be said that they are getting more, but in fact they are getting less). The university loses because its resources have been appropriated for a nonacademic purpose.
Higher education loses, because it is precisely when teachers offer themselves as moralists, therapists, political counselors, and agents of global change rather than as pedagogues that those who are on the lookout for ways to discredit higher education (often as a preliminary to taking it over) see their chance.
Does this mean that questions of value and discussion of current issues must be banished from the classroom? Not at all. No question, issue, or topic is off limits to classroom discussion so long as it is the object of academic rather than political or ideological attention. To many this will seem a difficult, if not impossible, distinction. On the contrary, as we will see, it is an easy one.
The necessity of academicizing
A faculty committee report submitted long ago to the president of the University of Chicago declares that the university exists “only for the limited . . . purposes of teaching and research” and reasons that “since the university is a community only for those limited and distinctive purposes, it is a community which cannot take collective action on the issues of the day without endangering the conditions for its existence and effectiveness” (Kalven Committee Report on the University’s Role in Political and Social Action, November 11, 1967).
Of course it can and should take collective (and individual) action on those issues relevant to the educational mission — the integrity of scholarship, the evil of plagiarism, and the value of a liberal education. Indeed failure to pronounce early and often on these matters would constitute a dereliction of duty. But neither the university as a collective nor its faculty as individuals should advocate personal, political, moral, or any other kind of views except academic views.
The only advocacy that should go on in the classroom is the advocacy of what James Murphy has identified as the intellectual virtues, “thoroughness, perseverance, intellectual honesty,” all components of the cardinal academic virtue of being “conscientious in the pursuit of truth” (“Good Students and Good Citizens,” New York Times, September 15, 2002).
A recent Harris Poll revealed that in the public’s eye teachers are the professionals most likely to tell the truth; and this means, I think, that telling the truth is what the public expects us to be doing. If you’re not in the pursuit-of-truth business, you should not be in the university.
There are many objections to this severe account of what academics should and shouldn’t do, but one is almost always raised — how do you draw the line? Even if your intentions are good, how do you refrain from inadvertently raising inappropriate issues in the classroom?
I call this the objection of impossibility, which takes two forms. One form says that teachers come to the classroom as fully developed beings who have undergone certain courses of instruction, joined political parties, embraced or refused religious allegiances, pledged themselves to various causes, and been persuaded to the truth of any number of moral or ideological propositions. In short, teachers believe something, indeed many things, and wouldn’t it be impossible for them to detach themselves from these formative beliefs and perform in a purely academic manner? Wouldn’t the judgments they offered and the conclusions they reached be influenced, if not largely determined, by the commitments I say they should set aside?
This objection contrives to turn the unavailability of purity — which I certainly acknowledge — into the impossibility of making distinctions between contexts and the behaviors appropriate to them. Even if it is the case that whatever we do is shaped to some extent by what we’ve done in the past, that past is filtered through the conventional differences by which we typically organize our daily lives.
We understand, for example, that proper behavior at the opera differs from proper behavior at a ball game, and we understand too that proper behavior at the family dinner table differs from proper behavior at a corporate lunch. It would be possible to trace our actions in all of these contexts back to decisions made and allegiances formed long ago, but those actions would still be distinguishable from one another by the usual measures that mark off one social context from another.
The fact that we bring a signature style, fashioned over many years, to whatever we do does not mean that we are always doing the same thing. We are perfectly capable of acting in accordance with the norms that belong to our present sphere of activity, even if our “take” on those norms is inflected somewhat by norms we affirm elsewhere.
But is it so easy to compartmentalize one’s beliefs and commitments? Yes it is. In fact, we do it all the time when we refrain, for example, from inserting our religious beliefs or our private obsessions into every situation or conversation no matter what its content. Those who cannot or will not so refrain are shunned by their neighbors and made the object of satires by authors like Swift and Dickens. Setting aside the convictions that impel us in our political lives in order to take up the task of teaching (itself anchored by convictions, but ones specific to its performance) is not at all impossible, and if we fail to do it, it is not because we could not help ourselves, but because we have made a deliberate choice to be unprofessional.
The second form of the impossibility objection asserts that there can be no distinction between politics and the academy because everything is political. It is the objection that in many courses, especially courses given at a law school or by political science departments, the materials being studied are fraught with political, social, ethical, moral, and religious implications. How can those materials be taught at all without crossing the line I have drawn? Should they be excluded or allowed in only if they have first been edited so that the substantive parts are cut out? Not at all.
I am not urging a restriction on content — any ideology, agenda, even crusade is an appropriate object of study. Rather I am urging a restriction on what is done with the content when it is brought into the classroom. If an idea or a policy is presented as a candidate for allegiance — aided by the instructor, students are to decide where they stand on the matter — then the classroom has been appropriated for partisan purposes.
But if an idea or a policy is subjected to a certain kind of interrogation — what is its history? how has it changed over time? who are its prominent proponents? what are the arguments for and against it? with what other policies is it usually packaged? — then its partisan thrust will have been blunted, for it will have become an object of analysis rather than an object of affection.
In the fall of 2004, my freshman students and I analyzed a speech of John Kerry’s and found it confused, contradictory, inchoate, and weak. Six weeks later I went out and voted for John Kerry. What I was doing in class was subjecting Kerry’s arguments to an academic interrogation. Do they hang together? Are they coherent? Do they respond to the issues? Are they likely to be persuasive? He flunked. But when I stepped into the ballot box, I was asking another set of questions: Does Kerry represent or speak for interests close to mine? Whom would he bring into his administration? What are likely to be his foreign policy initiatives? How does he stand on the environment?
The answers I gave to the first set of academic questions had no relationship whatsoever to the answers I gave to the second set of political questions.
Whether it is a person or a policy, it makes perfect sense to approve it in one venue and disapprove it in another, and vice versa. You could decide that despite the lack of skill with which a policy was defended (an academic conclusion), it was nevertheless the right policy for the country (a political decision). In the classroom, you can probe the policy’s history; you can explore its philosophical lineage; you can examine its implications and likely consequences, but you can’t urge it on your students. Everything depends on keeping these two judgments, and the activities that generate them, separate.
It might be objected that while it may be easy to remain within academic bounds when the debate is about the right interpretation of Paradise Lost, the line between the academic and the political has been blurred before the discussion begins when the subject is ethics and students are arguing, for example, about whether stem cell research is a good or bad idea. But students shouldn’t be arguing about whether stem cell research is a good or bad idea.
They should be studying the arguments various parties have made about stem cell research. Even in a class focused on ethical questions, the distinction I would enforce holds. Analyzing ethical issues is one thing; deciding them is another, and only the first is an appropriate academic activity. Again, I do not mean to exclude political topics from the classroom, but to insist that when political topics are introduced, they not be taught politically, that is, with a view to either affirming or rejecting a particular political position.
The name I give to this process whereby politically explosive issues are made into subjects of intellectual inquiry is “academicizing.” To academicize a topic is to detach it from the context of its real world urgency, where there is a vote to be taken or an agenda to be embraced, and insert it into a context of academic urgency, where there is an account to be offered or an analysis to be performed.
Consider as an example the Terri Schiavo tragedy. How can this event in our national history be taught without taking sides on the issues it raises? Again, simple: Discuss it as a contemporary instance of a tension that has structured American political thought from the founders to John Rawls — the tension between substantive justice, justice rooted in a strong sense of absolute right and wrong, and procedural justice, justice tied to formal rules that stipulate the steps to be taken and the persons authorized to take them. On one side were those who asked the question: what is the morally right thing to do about Terri Schiavo?
On the other side there were those who asked the question: who is legally entitled to make the relevant decisions independently of whether or not we think those decisions morally justified? Once these two positions are identified, their sources can be located in the work of Locke, Kant, Mill, Isaiah Berlin, and others, and the relationship between those sources and the Schiavo incident can become the focus of analysis. As this is happening — as the subject is being academicized — there will be less and less pressure in the class to come down on one side or the other and more and more pressure to describe accurately and fully the historical and philosophical antecedents of both sides.
A political imperative will have been replaced by an academic one. There is no topic, however politically charged, that will resist academicization. Not only is it possible to depoliticize issues that have obvious political content; it is easy.
How do you know whether or not you are really academicizing? Just apply a simple test: am I asking my students to produce or assess an account of a vexed political issue, or am I asking my students to pronounce on the issue? Some cases are easy. The writing instructor who appended to his syllabus on Palestinian poetics the admonition “Conservative students should seek instruction elsewhere” was obviously defaulting on his academic responsibilities.
So are those professors who skip a class in order to participate in a political rally; even if their students are not encouraged to attend the rally, a message is being sent, and it is the wrong message. Some teachers announce their political allegiances up front and believe that by doing so they inoculate their students against the danger of indoctrination. But the political affiliations of a teacher will be irrelevant if political questions are analyzed rather than decided in the classroom. Coming clean about your own partisan preferences might seem a way of avoiding politics, but it sends the message that in this class political judgments will be part of what’s going on, and again that is the wrong message.
The institutional message
The wrong message can be sent by institutions as well as by those they employ. The basic test of any action contemplated by a university should take the form of a simple question: Has the decision to do this (or not do this) been reached on educational grounds? Let’s suppose the issue is whether or not a university should fund a program of intercollegiate athletics. Some will say “yes” and argue that athletics contributes to the academic mission; others will say “no” and argue that it doesn’t. If the question is decided in the affirmative, all other questions — Should we have football? Should we sell sweatshirts? Should we have a marching band? — are business questions and should be decided in business terms, not in terms of global equity.
Once the university has committed itself to an athletic program it has also committed itself to making it as profitable as possible, if only because the profits, if there are any, will be turned into scholarships for student athletes and others.
The same reasoning applies to investment strategies. It is the obligation of the investment managers to secure the best possible return; it is not their obligation to secure political or social or economic justice. They may wish to do those things as private citizens or as members of an investment club, but as university officers their duty is to grow the endowment by any legal means available. The argument holds also for those in charge of maintenance and facilities. The goal should be to employ the best workers at the lowest possible wages. The goal should not be to redress economic disparities by unilaterally paying more than the market demands.
When a university sets wages, it sets wages, period (sometimes a cigar is just a cigar). The action has its own internal-to-the-enterprise shape, and while one could always abstract away from the enterprise to some larger context in which the specificity of actions performed within it disappears and everything one does is “taking a stand,” it is hard to see that anything is gained except a certain fuzziness of reference. The logic — the logic of the slogan “everything is political” — is too capacious, for it amounts to saying that whenever anyone does anything, he or she is coming down on one side or another of a political controversy and “taking a stand.”
But there is a difference between a self-consciously political act (such as the one my wife performs when she refuses to purchase goods manufactured by companies engaged in or benefiting from research on animals) and an act performed with no political intention at all, although it, inevitably, has a political effect (at least by some very generous definition of what goes into the political). Universities can pay wages with two intentions: (1) to secure workers, whether faculty or staff, who do the job that is required and do it well and (2) to improve the lot of the laboring class.
The first intention has nothing to do with politics and everything to do with the size of the labor pool, the law of supply and demand, current practices in the industry, etc. The second intention has everything to do with politics — the university is saying, “here we declare our position on one of the great issues of the day” — and it is not an intention appropriate to an educational institution. Nor is it appropriate for universities to divest their funds because they morally disapprove of countries or companies.
If universities must distance themselves from any entity that has been accused of being ethically challenged, there will be a very long list of people, companies, and industries they will have to renounce as business partners: brokerage firms, pharmaceutical firms, online-gambling companies, oil companies, automobile manufacturers, real-estate developers, cosmetic companies, fast-food restaurants, Hewlett-Packard, Microsoft, Wal-Mart, Target, Martha Stewart, Richard Grasso, and George Steinbrenner. And if you’re going to spurn companies involved with Sudan, what about North Korea, Iran, Syria, China, Colombia, the Dominican Republic, Venezuela, Argentina, Russia, Israel, and (in the eyes of many left-leaning academics) the United States?
These lists are hardly exhaustive, and they are growing daily. Taking only from the pure will prove to be an expensive proposition (even Walt Disney won’t survive the cut) and time-consuming too, as the university becomes an extension of Human Rights Watch.
But if you take their money, aren’t you endorsing their ethics and in effect becoming a partner in their crimes? No. If you take their money, you’re taking their money. That’s all. The crimes they may have committed will be dealt with elsewhere, and as long as the funds have not been impounded and are in fact legally the possession of those who offer them, the act of accepting them signifies nothing more than appreciation of the gift and the intention to put it to good academic use.
So are there no circumstances in which a university should decline funds offered to it, except the circumstance of money legally (not morally) dirty? Yes, there is one — when the funds come with strings attached, when the donor says these are the conclusions I want you to reach, or these are the faculty I want you to hire, or these are the subjects I want you to teach or stop teaching.
Every university already has a rule against accepting donations so encumbered, and it is a matter of record that tobacco companies abide by this restriction and do not expect (although they may hope) that their contributions will produce results friendly to their cause.
What’s left?
But wouldn’t a university uninvolved in the great issues of the day be a place without passion, where classrooms were bereft of lively discussion and debate? Definitely not. While the urgency of the political question will fade in the classroom I have imagined, it will have become a far livelier classroom as a result. In the classrooms I have in mind, passions run high as students argue about whether the religion clause of the First Amendment, properly interpreted, forbids student-organized prayers at football games, or whether the Rawlsian notion of constructing a regime of rights from behind a “veil of ignorance” makes sense, or whether the anthropological study of a culture inevitably undermines its integrity.
I have seen students discussing these and similar matters if not close to coming to blows then very close to jumping up and down and pumping their fists. These students are far from apathetic or detached, but what they are attached to (this again is the crucial difference) is the truth of the position to which they have been persuaded, and while that truth, strongly held, might lead at some later time to a decision to go out and work for a candidate or a policy, deciding that is not what is going on in the classroom.
By invoking the criterion of truth, I’ve already answered the objection that an academicized classroom — a classroom where political and moral agendas are analyzed, not embraced — would be value-free and relativistic. If anything is a value, truth is, and the implicit (and sometimes explicit) assumption in the classroom as I envision it is that truth, and the seeking of truth, must always be defended. To be sure, truth is not the only value and there are others that should be defended in the contexts to which they are central; but truth is a pre-eminent academic value, and adherence to it is exactly the opposite of moral relativism.
You will never hear in any of my classes the some-people-say-x-but-others-say-y-and-who’s-to-judge dance. What I strive to determine, together with my students, is which of the competing accounts of a matter (an academic not a political matter) is the right one and which are wrong. “Right” and “wrong” are not in the lexicon of moral relativism, and the students who deliver them as judgments do so with a commitment as great as any they might have to a burning social issue.
Students who are asked to compare the models of heroism on display in the Iliad, the Aeneid, and Wordsworth’s Prelude, or to chart the changes in the legal understanding of what the founders meant when they enjoined Congress from establishing a religion, will engage in discussions that are at least as animated as any they might have in the dorm room about some pressing issue of the day. It is only if you forget that academic questions have histories, and that those histories have investments, and that those investments are often cross- and interdisciplinary that you could make the mistake of thinking that confining yourself to them and resisting the lure of supposedly “larger” questions would make for an experience without spirit and energy.
Not only is the genuinely academic classroom full of passion and commitment; it is more interesting than the alternative.
The really dull classroom would be the one in which a bunch of 19- or 20-year-olds debate assisted suicide, physician-prescribed marijuana, or the war in Iraq in response to the question “What do you think?” Sure, lots of students would say things, but what they would say would be completely predictable — a mini-version of what you hear on the Sunday talk shows — in short, a rehearsing of opinions.
Meanwhile the genuine excitement of an academic discussion where you have a chance of learning something, as opposed to just blurting out uninformed opinions, will have been lost. What teacher and student are jointly after is knowledge, and the question should never be “What do you think?” (unless you’re a social scientist conducting a survey designed to capture public opinion). The question should be “What is the truth?” and the answer must stand up against challenges involving (among other things) the quality and quantity of evidence, the cogency of arguments, the soundness of conclusions, and so forth.
At the (temporary) end of the process, both students and teachers will have learned something they didn’t know before (you always know what your opinions are; that’s why it’s so easy to have them) and they will have learned it by exercising their cognitive capacities in ways that leave them exhilarated and not merely self-satisfied. Opinion-sharing sessions are like junk food: they fill you up with starch and leave you feeling both sated and hungry. A sustained inquiry into the truth of a matter is an almost athletic experience; it may exhaust you, but it also improves you.
What’s the use?
It will not improve you, however, in ways that make you a better person or a better citizen.
A good liberal arts course is not good because it tells you what to do when you next step into the ballot box or negotiate a contract. A good liberal arts course is good because it introduces you to questions you did not know how to ask and provides you with the skills necessary to answer them, at least provisionally. And what do you do with the answers you arrive at? What do you do with the habits of thought that have become yours after four or more years of discussing the mind/body problem, or the structure of DNA, or Fermat’s theorem, or the causes of World War I?
Beats me! As far as I can tell those habits of thought and the liberal arts education that provides them don’t enable you to do anything, and, even worse, neither do they prevent you from doing anything.
The view I am offering of higher education is properly called deflationary; it takes the air out of some inflated balloons. It denies to teaching the moral and philosophical pretensions that lead practitioners to envision themselves as agents of change or as the designers of a “transformative experience,” a phrase I intensely dislike. I acknowledge a sense in which education can be transformative. A good course may transform a student who knew little about the material in the beginning into a student who knows something about it at the end. That’s about all the transformation you should or could count on. Although the debates about what goes on in our colleges and universities are often conducted as if large moral, philosophical, and even theological matters are at stake, what is really at stake, more often than not, is a matter of administrative judgment with respect to professional behavior and job performance.
Teaching is a job, and what it requires is not a superior sensibility or a purity of heart and intention — excellent teachers can be absolutely terrible human beings, and exemplary human beings can be terrible teachers — but mastery of a craft. Teachers who prefer grandiose claims and ambitions to that craft are the ones who diminish it and render it unworthy.
A convenient summary of the grandiose claims often made for teaching can be found in an issue of the journal Liberal Education. Here are some sentences from that issue:
A classroom that teaches the virtues of critical analysis and respectful debate can go at least some way to form citizens for a more deliberative democracy.
A liberal arts college or university that helps young people to learn to speak in their own voices and to respect the voices of others will have done a great deal to produce thoughtful and potentially creative world citizens.
The aims of a strong liberal education include . . . shaping ethical judgment and a capacity for insight and concern for others.
Contemporary liberal education must look beyond the classroom to the challenges of the community, the complexities of the workplace, and the major issues in the world.
Students need to be equipped for living in a world where moral decisions must be made.
To which I respond, no, no, no, no, and no. A classroom that teaches critical analysis (sometimes called “critical thinking,” a phrase without content) will produce students who can do critical analysis; and those students, no matter how skillfully analytical they have become, will not by virtue of that skill be inclined to “respect the voices of others.” Learning how to perform in the game of argument is no guarantee either of the quality or of the morality of the arguments you go on to make.
Bad arguments, bad decisions, bad actions are as available to the members of Phi Beta Kappa as they are to the members of street gangs. Moreover, as I said earlier, respecting the voices of others is not even a good idea. You shouldn’t respect the voices of others simply because they are others (that’s the mistake of doctrinaire multiculturalism); you should respect the voices of those others whose arguments and recommendations you find coherent and persuasive.
And as for ethical judgment in general, no doubt everything you encounter helps to shape it, but reading novels by Henry James is not a special key to achieving it; and indeed — and there are many examples of this in the world — readers of Henry James or Sylvia Plath or Toni Morrison can be as vile and as cruel and as treacherous as anyone else. And if students “need to be equipped for living in a world where moral decisions must be made,” they’d better seek the equipment elsewhere, perhaps from their parents, or their churches, or their synagogues, or their mosques.
Nor can I agree that “contemporary liberal education must look beyond the classroom to the challenges of the community ”; for it is only one short step from this imperative to the assertion that what goes on in the liberal arts classroom is merely preliminary to what lies beyond it, one short step to the judgment that what goes on in the liberal arts classroom acquires its value from what happens elsewhere; and then it is no step at all to conclude that what goes on in the liberal arts classroom can only be justified by an extracurricular payoff.
And here we come to the heart of the matter, the justification of liberal education. You know the questions: Will it benefit the economy? Will it fashion an informed citizenry? Will it advance the cause of justice? Will it advance anything?
Once again the answer is no, no, no, and no.
At some level of course, everything we ultimately do has some relationship to the education we have received. But if liberal arts education is doing its job and not the job assigned to some other institution, it will not have as its aim the bringing about of particular effects in the world. Particular effects may follow, but if they do, it will be as the unintended consequences of an enterprise which, if it is to remain true to itself, must be entirely self-referential, must be stuck on itself, must have no answer whatsoever to the question, “what good is it?”
In a wonderful essay titled “What Plato Would Allow” (Nomos 37, 1995), political theorist Jeremy Waldron muses about the appropriate response to someone who asks of philosophers, “What’s the point of your work?” or “What difference is it going to make?” He replies (and I agree completely with him) that “we are not really doing . . . philosophy, and thus paradoxically . . . we are probably not really being of much use, unless we are largely at a loss as to how to answer that question.”
An activity whose value is internal to its performance will have unpredictable and unintended effects in the world outside the classroom. But precisely because they are unpredictable and unintended, it is a mistake to base one’s teaching on the hope of achieving them.
If by the end of a semester you have given your students an overview of the subject (as defined by the course’s title and description in the catalogue) and introduced them to the latest developments in the field and pointed them in the directions they might follow should they wish to inquire further, then you have done your job. What they subsequently do with what you have done is their business and not anything you should be either held to account for or praised for. (Charlton Heston once said to Laurence Olivier, “I’ve finally learned to ignore the bad reviews.” “Fine,” Olivier replied, “now learn to ignore the good ones.”)
The question of what you are responsible for is also the question of what you should aim for, and what you should aim for is what you can aim for — that is, what you can reasonably set out to do as opposed to what is simply not within your power to do. You can reasonably set out to put your students in possession of a set of materials and equip them with a set of skills (interpretive, computational, laboratory, archival), and even perhaps (although this one is really iffy) instill in them the same love of the subject that inspires your pedagogical efforts.
You won’t always succeed in accomplishing these things — even with the best of intentions and lesson plans there will always be inattentive or distracted students, frequently absent students, unprepared students, and on-another-planet students — but at least you will have a fighting chance given the fact that you’ve got them locked in a room with you for a few hours every week for four months.
You have little chance (and that entirely a matter of serendipity), however, of determining what they will make of what you have offered them once the room is unlocked for the last time and they escape first into the space of someone else’s obsession and then into the space of the wide, wide world.
And you have no chance at all (short of a discipleship that is itself suspect and dangerous) of determining what their behavior and values will be in those aspects of their lives that are not, in the strict sense of the word, academic. You might just make them into good researchers. You can’t make them into good people, and you shouldn’t try.
Stanley Fish//Hoover Institution Policy Review no. 150
The classroom is not your political platform.
But if they do that, they will be practicing without a license and in all likelihood doing a bad job at a job they shouldn ’t be doing at all. When that happens — and unfortunately it does happen — everyone loses. The students lose because they’re not getting what they paid for (it will be said that they are getting more, but in fact they are getting less). The university loses because its resources have been appropriated for a nonacademic purpose.
Higher education loses, because it is precisely when teachers offer themselves as moralists, therapists, political counselors, and agents of global change rather than as pedagogues that those who are on the lookout for ways to discredit higher education (often as a preliminary to taking it over) see their chance.
Does this mean that questions of value and discussion of current issues must be banished from the classroom? Not at all. No question, issue, or topic is off limits to classroom discussion so long as it is the object of academic rather than political or ideological attention. To many this will seem a difficult, if not impossible, distinction. On the contrary, as we will see, it is an easy one.
The necessity of academicizing
Afaculty committee report submitted long ago to the president of the University of Chicago declares that the university exists “only for the limited . . . purposes of teaching and research” and reasons that “since the university is a community only for those limited and distinctive purposes, it is a community which cannot take collective action on the issues of the day without endangering the conditions for its existence and effectiveness ” (Kalven Committee Report on the University’s Role in Political and Social Action, November 11, 1967).
Of course it can and should take collective (and individual) action on those issues relevant to the educational mission — the integrity of scholarship, the evil of plagiarism, and the value of a liberal education. Indeed failure to pronounce early and often on these matters would constitute a dereliction of duty. But neither the university as a collective nor its faculty as individuals should advocate personal, political, moral, or any other kind of views except academic views.
The only advocacy that should go on in the classroom is the advocacy of the intellectual virtues.
The only advocacy that should go on in the classroom is the advocacy of what James Murphy has identified as the intellectual virtues, “thoroughness, perseverance, intellectual honesty,” all components of the cardinal academic virtue of being “conscientious in the pursuit of truth” (“Good Students and Good Citizens,” New York Times, September 15, 2002).
A recent Harris Poll revealed that in the public’s eye teachers are the professionals most likely to tell the truth; and this means, I think, that telling the truth is what the public expects us to be doing. If you ’re not in the pursuit-of-truth business, you should not be in the university.
There are many objections to this severe account of what academics should and shouldn ’t do, but one is almost always raised — how do you draw the line? Even if your intentions are good, how do you refrain from inadvertently raising inappropriate issues in the classroom?
I call this the objection of impossibility, which takes two forms. One form says that teachers come to the classroom as fully developed beings who have undergone certain courses of instruction, joined political parties, embraced or refused religious allegiances, pledged themselves to various causes, and been persuaded to the truth of any number of moral or ideological propositions. In short, teachers believe something, indeed many things, and wouldn ’t it be impossible for them to detach themselves from these formative beliefs and perform in a purely academic manner?
Wouldn ’t the judgments they offered and the conclusions they reached be influenced, if not largely determined, by the commitments I say they should set aside?
This objection contrives to turn the unavailability of purity — which I certainly acknowledge — into the impossibility of making distinctions between contexts and the behaviors appropriate to them. Even if it is the case that whatever we do is shaped to some extent by what we ’ve done in the past, that past is filtered through the conventional differences by which we typically organize our daily lives.
We understand, for example, that proper behavior at the opera differs from proper behavior at a ball game, and we understand too that proper behavior at the family dinner table differs from proper behavior at a corporate lunch. It would be possible to trace our actions in all of these contexts back to decisions made and allegiances formed long ago, but those actions would still be distinguishable from one another by the usual measures that mark off one social context from another.
The fact that we bring a signature style, fashioned over many years, to whatever we do does not mean that we are always doing the same thing. We are perfectly capable of acting in accordance with the norms that belong to our present sphere of activity, even if our “take” on those norms is inflected somewhat by norms we affirm elsewhere.
We manage to refrain from inserting our private obsessions into every conversation or situation.
But is it so easy to compartmentalize one’s beliefs and commitments? Yes it is. In fact, we do it all the time when we refrain, for example, from inserting our religious beliefs or our private obsessions into every situation or conversation no matter what its content. Those who cannot or will not so refrain are shunned by their neighbors and made the object of satires by authors like Swift and Dickens. Setting aside the convictions that impel us in our political lives in order to take up the task of teaching (itself anchored by convictions, but ones specific to its performance) is not at all impossible, and if we fail to do it, it is not because we could not help ourselves, but because we have made a deliberate choice to be unprofessional.
The second form of the impossibility objection asserts that there can be no distinction between politics and the academy because everything is political. It is the objection that in many courses, especially courses given at a law school or by political science departments, the materials being studied are fraught with political, social, ethical, moral, and religious implications. How can those materials be taught at all without crossing the line I have drawn? Should they be excluded or allowed in only if they have first been edited so that the substantive parts are cut out? Not at all.
I am not urging a restriction on content — any ideology, agenda, even crusade is an appropriate object of study. Rather I am urging a restriction on what is done with the content when it is brought into the classroom. If an idea or a policy is presented as a candidate for allegiance — aided by the instructor, students are to decide where they stand on the matter — then the classroom has been appropriated for partisan purposes.
But if an idea or a policy is subjected to a certain kind of interrogation — what is its history? how has it changed over time? who are its prominent proponents? what are the arguments for and against it? with what other policies is it usually packaged? — then its partisan thrust will have been blunted, for it will have become an object of analysis rather than an object of affection.
In the fall of 2004, my freshman students and I analyzed a speech of John Kerry’s and found it confused, contradictory, inchoate, and weak. Six weeks later I went out and voted for John Kerry. What I was doing in class was subjecting Kerry ’s arguments to an academic interrogation. Do they hang together? Are they coherent? Do they respond to the issues? Are they likely to be persuasive? He flunked. But when I stepped into the ballot box, I was asking another set of questions:
Does Kerry represent or speak for interests close to mine?
Whom would he bring into his administration?
What are likely to be his foreign policy initiatives?
How does he stand on the environment?
The answers I gave to the first set of academic questions had no relationship whatsoever to the answers I gave to the second set of political questions.
Whether it is a person or a policy, it makes perfect sense to approve it in one venue and disapprove it in another.
Whether it is a person or a policy, it makes perfect sense to approve it in one venue and disapprove it in another, and vice versa. You could decide that despite the lack of skill with which a policy was defended (an academic conclusion), it was nevertheless the right policy for the country (a political decision). In the classroom, you can probe the policy ’s history; you can explore its philosophical lineage; you can examine its implications and likely consequences, but you can ’t urge it on your students. Everything depends on keeping these two judgments, and the activities that generate them, separate.
It might be objected that while it may be easy to remain within academic bounds when the debate is about the right interpretation of Paradise Lost, the line between the academic and the political has been blurred before the discussion begins when the subject is ethics and students are arguing, for example, about whether stem cell research is a good or bad idea. But students shouldn ’t be arguing about whether stem cell research is a good or bad idea.
They should be studying the arguments various parties have made about stem cell research. Even in a class focused on ethical questions, the distinction I would enforce holds. Analyzing ethical issues is one thing; deciding them is another, and only the first is an appropriate academic activity. Again, I do not mean to exclude political topics from the classroom, but to insist that when political topics are introduced, they not be taught politically, that is, with a view to either affirming or rejecting a particular political position.
The name I give to this process whereby politically explosive issues are made into subjects of intellectual inquiry is “academicizing.” To academicize a topic is to detach it from the context of its real world urgency, where there is a vote to be taken or an agenda to be embraced, and insert it into a context of academic urgency, where there is an account to be offered or an analysis to be performed.
Consider as an example the Terry Schiavo tragedy. How can this event in our national history be taught without taking sides on the issues it raises? Again, simple: Discuss it as a contemporary instance of a tension that has structured American political thought from the founders to John Rawls — the tension between substantive justice, justice rooted in a strong sense of absolute right and wrong, and procedural justice, justice tied to formal rules that stipulate the steps to be taken and the persons authorized to take them. On one side were those who asked the question: what is the morally right thing to do about Terry Schiavo?
On the other side there were those who asked the question: who is legally entitled to make the relevant decisions independently of whether or not we think those decisions morally justified? Once these two positions are identified, their sources can be located in the work of Locke, Kant, Mill, Isaiah Berlin, and others, and the relationship between those sources and the Schiavo incident can become the focus of analysis. As this is happening — as the subject is being academicized — there will be less and less pressure in the class to come down on one side or the other and more and more pressure to describe accurately and fully the historical and philosophical antecedents of both sides.
A political imperative will have been replaced by an academic one. There is no topic, however politically charged, that will resist academicization. Not only is it possible to depoliticize issues that have obvious political content; it is easy.
How do you know whether or not you are really academicizing? Just apply a simple test: am I asking my students to produce or assess an account of a vexed political issue, or am I asking my students to pronounce on the issue? Some cases are easy. The writing instructor who appended to his syllabus on Palestinian poetics the admonition “Conservative students should seek instruction elsewhere” was obviously defaulting on his academic responsibilities.
So are those professors who skip a class in order to participate in a political rally; even if their students are not encouraged to attend the rally, a message is being sent, and it is the wrong message. Some teachers announce their political allegiances up front and believe by doing so they inoculate their students against the danger of indoctrination. But the political affiliations of a teacher will be irrelevant if political questions are analyzed rather than decided in the classroom. Coming clean about your own partisan preferences might seem a way of avoiding politics, but it sends the message that in this class political judgments will be part of what ’s going on, and again that is the wrong message.
The institutional message
The wrong message can be sent by institutions as well as by those they employ. The basic test of any action contemplated by a university should take the form of a simple question: Has the decision to do this (or not do this) been reached on educational grounds? Let ’s suppose the issue is whether or not a university should fund a program of intercollegiate athletics. Some will say “yes” and argue that athletics contributes to the academic mission; others will say “no” and argue that it doesn’t. If the question is decided in the affirmative, all other questions — should we have football? Should we sell sweatshirts? should we have a marching band? — are business questions and should be decided in business terms, not in terms of global equity.
Once the university has committed itself to an athletic program it has also committed itself to making it as profitable as possible, if only because the profits, if there are any, will be turned into scholarships for student athletes and others.
I don’t mean to exclude political topics from the classroom, but to insist that when they are taught, they not be taught politically.
The same reasoning applies to investment strategies. It is the obligation of the investment managers to secure the best possible return; it is not their obligation to secure political or social or economic justice. They may wish to do those things as private citizens or as members of an investment club, but as university officers their duty is to grow the endowment by any legal means available. The argument holds also for those in charge of maintenance and facilities. The goal should be to employ the best workers at the lowest possible wages. The goal should not be to redress economic disparities by unilaterally paying more than the market demands.
When a university sets wages, it sets wages, period (sometimes a cigar is just a cigar). The action has its own internal-to-the-enterprise shape, and while one could always abstract away from the enterprise to some larger context in which the specificity of actions performed within it disappears and everything one does is “taking a stand,” it is hard to see that anything is gained except a certain fuzziness of reference. The logic — the logic of the slogan “everything is political” — is too capacious, for it amounts to saying that whenever anyone does anything, he or she is coming down on one side or another of a political controversy and “taking a stand.”
But there is a difference between a self-consciously political act (such as the one my wife performs when she refuses to purchase goods manufactured by companies engaged in or benefiting from research on animals) and an act performed with no political intention at all, although it, inevitably, has a political effect (at least by some very generous definition of what goes into the political). Universities can pay wages with two intentions: ( 1) to secure workers, whether faculty or staff, who do the job that is required and do it well and (2) to improve the lot of the laboring class.
The first intention has nothing to do with politics and everything to do with the size of the labor pool, the law of supply and demand, current practices in the industry, etc. The second intention has everything to do with politics — the university is saying, “here we declare our position on one of the great issues of the day” — and it is not an intention appropriate to an educational institution. Nor is it appropriate for universities to divest their funds because they morally disapprove of countries or companies.
If universities must distance themselves from any entity that has been accused of being ethically challenged, there will be a very long list of people, companies, and industries they will have to renounce as business partners: brokerage firms, pharmaceutical firms, online-gambling companies, oil companies, automobile manufacturers, real-estate developers, cosmetic companies, fast-food restaurants, Hewlett-Packard, Microsoft, Wal-Mart, Target, Martha Stewart, Richard Grasso, and George Steinbrenner. And if you ’re going to spurn companies involved with Sudan, what about North Korea, Iran, Syria, China, Colombia, the Dominican Republic, Venezuela, Argentina, Russia, Israel, and (in the eyes of many left-leaning academics) the United States?
These lists are hardly exhaustive and growing daily. Taking only from the pure will prove to be an expensive proposition (even Walt Disney won ’t survive the cut) and time consuming too, as the university becomes an extension of Human Rights Watch.
But if you take their money, aren’t you endorsing their ethics and in effect becoming a partner in their crimes? No. If you take their money, you ’re taking their money. That’s all. The crimes they may have committed will be dealt with elsewhere, and as long as the funds have not been impounded and are in fact legally the possession of those who offer them, the act of accepting them signifies nothing more than appreciation of the gift and the intention to put it to good academic use.
So are there no circumstances in which a university should decline funds offered to it, except the circumstance of money legally (not morally) dirty? Yes, there is one — when the funds come with strings attached, when the donor says these are the conclusions I want you to reach, or these are the faculty I want you to hire, or these are the subjects I want you to teach or stop teaching.
Every university already has a rule against accepting donations so encumbered, and it is a matter of record that tobacco companies abide by this restriction and do not expect (although they may hope) that their contributions will produce results friendly to their cause.
What’s left?
But wouldn’t a university uninvolved in the great issues of the day be a place without passion, where classrooms were bereft of lively discussion and debate? Definitely not. While the urgency of the political question will fade in the classroom I have imagined, it will have become a far livelier classroom as a result. In the classrooms I have in mind, passions run high as students argue about whether the religion clause of the First Amendment, properly interpreted, forbids student-organized prayers at football games, or whether the Rawlsian notion of constructing a regime of rights from behind a “veil of ignorance” makes sense, or whether the anthropological study of a culture inevitability undermines its integrity.
I have seen students discussing these and similar matters if not close to coming to blows then very close to jumping up and down and pumping their fists. These students are far from apathetic or detached, but what they are attached to (this again is the crucial difference) is the truth of the position to which they have been persuaded, and while that truth, strongly held, might lead at some later time to a decision to go out and work for a candidate or a policy, deciding that is not what is going on in the classroom.
The implicit assumption in the classroom as I envision it is that truth, and the seeking of truth, must always be defended.
By invoking the criterion of truth, I’ve already answered the objection that an academicized classroom — a classroom where political and moral agendas are analyzed, not embraced — would be value-free and relativistic. If anything is a value, truth is, and the implicit (and sometimes explicit) assumption in the classroom as I envision it is that truth, and the seeking of truth, must always be defended. To be sure, truth is not the only value and there are others that should be defended in the contexts to which they are central; but truth is a pre-eminent academic value, and adherence to it is exactly the opposite of moral relativism.
You will never hear in any of my classes the some-people-say-x-but-others-say-y-and-who’s-to-judge dance. What I strive to determine, together with my students, is which of the competing accounts of a matter (an academic not a political matter) is the right one and which are wrong. “Right” and “wrong” are not in the lexicon of moral relativism, and the students who deliver them as judgments do so with a commitment as great as any they might have to a burning social issue.
Students who are asked to compare the models of heroism on display in the Iliad, the Aeneid, and Wordsworth’s Prelude, or to chart the changes in the legal understanding of what the founders meant when they enjoined Congress from establishing a religion, will engage in discussions that are at least as animated as any they might have in the dorm room about some pressing issue of the day. It is only if you forget that academic questions have histories, and that those histories have investments, and that those investments are often cross- and interdisciplinary that you could make the mistake of thinking that confining yourself to them and resisting the lure of supposedly “larger” questions would make for an experience without spirit and energy.
Not only is the genuinely academic classroom full of passion and commitment; it is more interesting than the alternative.
The really dull classroom would be the one in which a bunch of 19- or 20-year-olds debate assisted suicide, physician-prescribed marijuana, or the war in Iraq in response to the question “What do you think?” Sure, lots of students would say things, but what they would say would be completely predictable — a mini-version of what you hear on the Sunday talk shows — in short, a rehearsing of opinions.
Meanwhile the genuine excitement of an academic discussion where you have a chance of learning something, as opposed to just blurting out uninformed opinions, will have been lost. What teacher and student are jointly after is knowledge, and the question should never be “What do you think?” (unless you’re a social scientist conducting a survey designed to capture public opinion). The question should be “What is the truth?” and the answer must stand up against challenges involving (among other things) the quality and quantity of evidence, the cogency of arguments, the soundness of conclusions, and so forth.
At the (temporary) end of the process, both students and teachers will have learned something they didn’t know before (you always know what your opinions are; that’s why it’s so easy to have them) and they will have learned it by exercising their cognitive capacities in ways that leave them exhilarated and not merely self-satisfied. Opinion-sharing sessions are like junk food: they fill you up with starch and leave you feeling both sated and hungry. A sustained inquiry into the truth of a matter is an almost athletic experience; it may exhaust you, but it also improves you.
What’s the use?
It will not improve you, however, in ways that make you a better person or a better citizen.
A good liberal arts course is not good because it tells you what to do when you next step into the ballot box or negotiate a contract. A good liberal arts course is good because it introduces you to questions you did not know how to ask and provides you with the skills necessary to answer them, at least provisionally. And what do you do with the answers you arrive at? What do you do with the habits of thought that have become yours after four or more years of discussing the mind/body problem, or the structure of DNA, or Fermat’s theorem, or the causes of World War I?
Beats me! As far as I can tell, those habits of thought and the liberal arts education that provides them don’t enable you to do anything, and, even worse, neither do they prevent you from doing anything.
The view I am offering of higher education is properly called deflationary; it takes the air out of some inflated balloons. It denies to teaching the moral and philosophical pretensions that lead practitioners to envision themselves as agents of change or as the designers of a “transformative experience,” a phrase I intensely dislike. I acknowledge a sense in which education can be transformative. A good course may transform a student who knew little about the material in the beginning into a student who knows something about it at the end. That’s about all the transformation you should or could count on. Although the debates about what goes on in our colleges and universities are often conducted as if large moral, philosophical, and even theological matters are at stake, what is really at stake, more often than not, is a matter of administrative judgment with respect to professional behavior and job performance.
Teaching is a job, and what it requires is not a superior sensibility or a purity of heart and intention — excellent teachers can be absolutely terrible human beings, and exemplary human beings can be terrible teachers — but mastery of a craft. Teachers who prefer grandiose claims and ambitions to that craft are the ones who diminish it and render it unworthy.
A convenient summary of the grandiose claims often made for teaching can be found in an issue of the journal Liberal Education. Here are some sentences from that issue:
A classroom that teaches the virtues of critical analysis and respectful debate can go at least some way to form citizens for a more deliberative democracy.
A liberal arts college or university that helps young people to learn to speak in their own voices and to respect the voices of others will have done a great deal to produce thoughtful and potentially creative world citizens.
The aims of a strong liberal education include . . . shaping ethical judgment and a capacity for insight and concern for others.
Contemporary liberal education must look beyond the classroom to the challenges of the community, the complexities of the workplace, and the major issues in the world.
Students need to be equipped for living in a world where moral decisions must be made.
To which I respond, no, no, no, no, and no. A classroom that teaches critical analysis (sometimes called “critical thinking,” a phrase without content) will produce students who can do critical analysis; and those students, no matter how skillfully analytical they have become, will not by virtue of that skill be inclined to “respect the voices of others.” Learning how to perform in the game of argument is no guarantee either of the quality or of the morality of the arguments you go on to make.
Bad arguments, bad decisions, bad actions are as available to the members of Phi Beta Kappa as they are to the members of street gangs. Moreover, as I said earlier, respecting the voices of others is not even a good idea. You shouldn’t respect the voices of others simply because they are others (that’s the mistake of doctrinaire multiculturalism); you should respect the voices of those others whose arguments and recommendations you find coherent and persuasive.
And as for ethical judgment in general, no doubt everything you encounter helps to shape it, but reading novels by Henry James is not a special key to achieving it; and indeed — and there are many examples of this in the world — readers of Henry James or Sylvia Plath or Toni Morrison can be as vile and as cruel and as treacherous as anyone else. And if students “need to be equipped for living in a world where moral decisions must be made,” they’d better seek the equipment elsewhere, perhaps from their parents, or their churches, or their synagogues, or their mosques.
Nor can I agree that “contemporary liberal education must look beyond the classroom to the challenges of the community”; for it is only one short step from this imperative to the assertion that what goes on in the liberal arts classroom is merely preliminary to what lies beyond it, one short step to the judgment that what goes on in the liberal arts classroom acquires its value from what happens elsewhere; and then it is no step at all to conclude that what goes on in the liberal arts classroom can only be justified by an extracurricular payoff.
And here we come to the heart of the matter, the justification of liberal education. You know the questions: Will it benefit the economy? Will it fashion an informed citizenry? Will it advance the cause of justice? Will it advance anything?
Once again the answer is no, no, no, and no.
At some level, of course, everything we ultimately do has some relationship to the education we have received. But if liberal arts education is doing its job and not the job assigned to some other institution, it will not have as its aim the bringing about of particular effects in the world. Particular effects may follow, but if they do, it will be as the unintended consequences of an enterprise which, if it is to remain true to itself, must be entirely self-referential, must be stuck on itself, must have no answer whatsoever to the question, “What good is it?”
In a wonderful essay titled “What Plato Would Allow” (Nomos 37, 1995), political theorist Jeremy Waldron muses about the appropriate response to someone who asks of philosophers, “What’s the point of your work?” or “What difference is it going to make?” He replies (and I agree completely with him) that “we are not really doing . . . philosophy, and thus paradoxically . . . we are probably not really being of much use, unless we are largely at a loss as to how to answer that question.”
An activity whose value is internal to its performance will have unpredictable and unintended effects in the world outside the classroom. But precisely because they are unpredictable and unintended, it is a mistake to base one’s teaching on the hope of achieving them.
If by the end of a semester you have given your students an overview of the subject (as defined by the course’s title and description in the catalogue) and introduced them to the latest developments in the field and pointed them in the directions they might follow should they wish to inquire further, then you have done your job. What they subsequently do with what you have done is their business and not anything you should be either held to account for or praised for. (Charlton Heston once said to Laurence Olivier, “I’ve finally learned to ignore the bad reviews.” “Fine,” Olivier replied, “now learn to ignore the good ones.”)
The question of what you are responsible for is also the question of what you should aim for, and what you should aim for is what you can aim for — that is, what you can reasonably set out to do as opposed to what is simply not within your power to do. You can reasonably set out to put your students in possession of a set of materials and equip them with a set of skills (interpretive, computational, laboratory, archival), and even perhaps (although this one is really iffy) instill in them the same love of the subject that inspires your pedagogical efforts.
You won’t always succeed in accomplishing these things — even with the best of intentions and lesson plans there will always be inattentive or distracted students, frequently absent students, unprepared students, and on-another-planet students — but at least you will have a fighting chance given the fact that you’ve got them locked in a room with you for a few hours every week for four months.
You have little chance (and that entirely a matter of serendipity), however, of determining what they will make of what you have offered them once the room is unlocked for the last time and they escape first into the space of someone else’s obsession and then into the space of the wide, wide world.
And you have no chance at all (short of a discipleship that is itself suspect and dangerous) of determining what their behavior and values will be in those aspects of their lives that are not, in the strict sense of the word, academic. You might just make them into good researchers. You can’t make them into good people, and you shouldn’t try.