Can you champion human rights while at the same time denying natural rights?
This is a core question of political philosophy. It was raised anew for me while re-reading Conversations with Isaiah Berlin, a dialogue with one of the 20th century’s leading political theorists and historians of ideas.
Professor Berlin, a man deeply committed to liberty and pluralism, resisted the idea that one could apprehend “non-empirical, universal truths.” When asked by Ramin Jahanbegloo, the interviewer, how one can ground norms and values if one doesn’t believe in the rational method of justifying them, Berlin answered, “You don’t justify them. The norms don’t need justification, it is they which justify the rest, because they are basic.”
When pressed, Berlin admitted he doesn’t deny human rights. “I deny a priori lists of natural rights,” he said. “Of course, I don’t deny that there are general principles of behavior and human activity without which there cannot be a minimally decent society. But … I don’t think there is such a thing as direct non-empirical knowledge, intuition, inspection of eternal principles. Only universal human beliefs.”
When asked about Leo Strauss, Berlin said, “He could not get me to believe in eternal, immutable, absolute values, true for all men everywhere at all times, God-given Natural Law and the like. I cannot claim omniscience. Perhaps there is a world of eternal truths, values, which the magic eye of the true thinker can perceive – surely this can only belong to an elite to which I fear I have never been admitted.”
This conversation goes to the heart of an ancient question: In what is morality grounded? For Berlin, it was grounded in general principles of behavior and human activity, in norms, in a consensus of what constitutes decency and right and wrong. That can work for a time, as people act on an existing moral accord and intuition. But in the end that is never enough. Norms need to be grounded in permanent rather than provisional truths. Otherwise, we have only our own cultural consensus on what constitutes human rights, which makes it next to impossible to define a universal set of such rights. It also means we have no good justification for telling other societies, or for that matter even our own children, why they should hold to our particular consensus.
As Michael Gerson and I argue in City of Man, philosophers have tried for centuries to formulate a firm, secular theory of human rights. None has gained broad, much less universal assent, and none seems equal to the challenge of Nietzsche: if God is really dead, what is to stop the radical, destructive human will?
Berlin’s theory – liberalism without natural rights – is hung on a peg in midair. To care for and to sacrifice for the rights of other human beings, merely because they are human beings, requires an immutable moral and even metaphysical basis.
So why do human beings possess inherent value? People of the Jewish and Christian faiths have an answer: Men and women are created equal in worth, in the image of God. They believe in a human nature, which demands human rights.
Without some transcendent basis, human rights as a doctrine cannot defend itself from attack. Strauss understood that historicism – the belief that all standards are determined by cultural circumstances and that each society should be judged on its own terms rather than measured against a universal standard – was both self-contradictory and relativistic. For historicists there is no ground on which one could prefer a liberal regime over a totalitarian one. Everything, including justice, is arbitrary. “If all values are relative,” Strauss famously said, “then cannibalism is a matter of taste.” For Strauss, a refugee from Nazi Germany, this debate was not simply an abstract one.
In his interview with Jahanbegloo, Professor Berlin says of Hannah Arendt, “I am not ready to swallow her idea about the banality of evil. It is false. The Nazis were not ‘banal.’ Eichmann deeply believed in what he did. It was, he admitted, at the center of his being.” Berlin admits to having been “hopelessly secular.” The values of the Enlightenment, what people like Voltaire, Helvetius, Holbach, and Condorcet preached, are “deeply sympathetic to me,” he said. “Maybe they were too narrow, and often wrong about the facts of human experience, but they were great liberators. They liberated people from horrors, obscurantism, fanaticism, monstrous views. They were against cruelty, they were against oppression.”
So was Isaiah Berlin, a man of great intellect and learning. He just couldn’t tell you why.
Me (briefly):
I, an atheist, agree with Wehner that morality must be grounded in axiomatic truth, but I'd argue God doesn't comprise that truth. Rather, I'd argue that moral principles must be expanded universally and that, if they sustain their applicability so expanded, they thereby establish themselves. Joseph Tussman argued for this methodology in his appendix to Obligation and the Body Politic as a version of the categorical imperative. The golden rule, for example, so establishes itself.
I addressed this short note to Wehner:
...I enjoyed your brief treatment of this theme in your Contentions post. But I'd like to ask you this: you say, "Men and women are created equal in worth, in the image of God. They believe in a human nature, which demands human rights." Does your argument fail if God is taken away as a premise? And if your belief in God is just that, a belief, a matter of faith, if not a leap, how are you any further ahead than was Berlin? For then, on your resolution, not truth but (mere?) belief and faith ground morality...
Sincerely,
Itzik Basman...
Pete Wehner:
Dear Itzik:
Thanks so much for your note and your very good, and profound, set of questions.
I think you’re right that a belief in transcendent truths requires, at some level, a belief in a transcendent source of such truths that has to be accepted on faith. But I don’t think that puts me on the same level as Berlin, who argued that there are no such truths and that social consensus is the only source of moral order. The fact is, what you call "a leap of faith" is the real source of moral order, and the notion that only arguments that can be reasoned all the way to the bottom without reference to God can be reliable arguments is exactly the problem I cite in Berlin.
God created reason, not the other way around. So it's not going to be possible to "remove God as a premise." To your contention that this view means that "not truth but belief grounds morality" I would say that belief in the truth grounds morality, but such belief cannot be fully arrived at by reason alone, because such truth is above our reason--if it were not then it could not be a check on our will. That does mean we can't have civilization without at least some general sense that our reason appeals to something above itself, which is my point.
Berlin did not believe in truth, he just believed in good behavior. But we wouldn't have good behavior if most people did not believe in truth--that is, if most people were not willing to make a leap of faith. He wanted the flower without the stem, but flowers don't last very long that way.
As ever,
Pete
Me:
Dear Pete:
Thank you for your thoughtful responses to my questions to you spurred on by your most interesting post in Contentions.
My note to you was so brief that my very brevity—not at all “the soul of wit”—belied the richly nuanced, complex depths of these issues. I didn’t want to prevail on you by going on for too long. In the same spirit of not wanting to prevail on you, let me offer a few further albeit less brief comments on your reply.
I may have to go back to my Berlin but it’s not my understanding that he holds that there are no universal moral principles or that social consensus is the source of moral order, at least not as the latter proposition may be properly understood. Rather, he holds that there are ultimate moral principles derivable from a rational apprehension of the world but that they do not reduce themselves to any integrated, moral monism. He is a values pluralist.
He holds that there are similarly reasonable but sometimes irreconcilable ultimate values, always in flux and tension with each other, in his words, “an order of things which clashes and the constant need for conciliation, adjustment, balance, an order that is always in a condition of imperfect equilibrium, which is required to be maintained by conscious effort.” Berlin’s view is that that clash inexorably yields tragic trade-offs.
So when Berlin speaks about something like social consensus as "the source" of social morality, he is, I think, speaking about society as the filter and imperfect adjuster of pluralistic principles into a (hopefully) working disequilibrium—for example the tension between liberty and equality, a contrariety that Ronald Dworkin, a secular values monist, argued against Berlin (Dworkin has just now put out a book enfolding the two into one in his conception of justice). Berlin does not speak, as I understand him, of morality as a matter simply of popular will.
If there is no God, then God did not create reason. So, in a sense, I have just removed God as a premise. If I remove God as a premise, what am I left with as a source for morality? I am left with my reason and my intuitions, which lead me, by a kind of Kantian procedure of universalizing the consequences of what I hold to be the moral case, to understand whether that case can be made out. (Joseph Tussman has a good outline of this mode of applied categorical imperative reasoning in an appendix to his Obligation and the Body Politic.)
I’m not sure it’s helpful, Pete, to say “belief in the truth grounds morality.” You say that to counter my claim that if faith grounds morality, you are not further ahead than Berlin on your iteration of him (which is not to say, of course, that your and his arguments are parallel). For you use “the truth” to fortify belief when you have only belief to say what “the truth” is. If there are ultimate moral principles—and I argue there are, as does Berlin, as I have read him—then they comprise truths independent of reason. But, again, our difference is your sourcing them in the object of your faith whereas I argue they can be apprehended by a combination of intuition and a certain mode of “right reasoning.”
So, finally, I think Berlin wants the flower and the stem. Of course, his botany is entirely different from yours.
It's entirely a pleasure briefly discussing this with you.
Regards,
Itzik
Last week I had a rather ambivalent experience at the London School of Economics which may point to something beyond the personal—indeed, to where Britain, and possibly Western Europe as a whole, is heading.
I was invited to lecture on the first Arab-Israeli war of 1948. A few hours earlier, a fire had broken out in a nearby building and Kingsway was sealed off, so the taxi dropped me off a few blocks away. As I walked down Kingsway, a major London thoroughfare, a small mob—I don't think any other word is appropriate—of some dozen Muslims, Arabs and their supporters, both men and women, surrounded me and, walking alongside me for several hundred yards as I advanced towards the building where the lecture was to take place, raucously harangued and baited me with cries of "fascist," "racist," "England should never have allowed you in," "you shouldn't be allowed to speak." Several spoke in broken, obviously newly acquired, English. Violence was thick in the air though none was actually used. Passersby looked on in astonishment, and perhaps shame, but it seemed the sight of angry bearded, caftaned Muslims was sufficient to deter any intervention. To me, it felt like Brownshirts in a street scene in 1920s Berlin—though on Kingsway no one, to the best of my recall, screamed the word "Jew."
In the lecture hall, after a cup of tea, the session, with an audience of some 350 students and others, passed remarkably smoothly. Entry required tickets, which were freely dispensed upon the provision of name and address. The LSE had beefed up security and several bobbies stood outside the building confronting the dozen or so demonstrators who held aloft placards stating "Benni Morris is a Fascist," "Go home," etc. Inside, in the lecture hall, surprisingly, there was absolute silence during my talk; you could have heard a pin drop. The Q and A session afterwards was by and large civilized, though several Muslim participants, including girls with scarves, displayed anger and dismissiveness. One asserted: "You are not an historian"; another, more delicately, suggested that the lecturer "professes to be a serious historian." However, the overwhelming majority of the audience was respectful and, in my view, appreciative (to judge by the volume of clapping at the end of the lecture and at the end of the Q and A), but a small minority jeered and clapped loudly when anti-Zionist questions or points were raised.
The manner of our exit from the lecture hall was also noteworthy. The chairman asked the audience to stay in their seats until the group on stage departed. I was ushered by the security team down an elevator and through a narrow basement passage full of kitchen stores and out a side entrance. Like an American president in a B-rated thriller.
Another disconcerting element in what went on in the lecture hall was the hosting LSE professor's brief introductory remarks, which failed completely to note the harassment and intimidation of the lecturer on Kingsway (of which he had been made fully aware), or to criticize it in any way. My assumption was that some of the harassers were LSE students.
There was a sense that the chairman was deliberately displaying caution in view of the world in which he lives. Which brings me back to what happened on Kingsway.
Uncurbed Muslim intimidation, in the public domain, of people seen as disagreeing with them is palpable and palpably affecting the British Christian majority among whom they live, indeed, cowing them into silence. One senses real fear (perhaps a corner was turned with the Muslim reactions around the world to the "Mohammed cartoons" and the responses in the West to these reactions). Which, if true, is a sad indication of what is happening in the historic mother of democracies and may point to what is happening, and will increasingly happen, in Western Europe in general in the coming decades. (A video of the LSE talk is on the website. A Muslim cameraman also made a video of the mob scene on Kingsway and posted it on the web—but he appears to have thought better of it and subsequently removed it.)
Carlin Romano/American Scholar/March 2011
How can it be that philosophy, the world’s oldest profession without climactic satisfactions, remains so ill-defined? No matter where you turn, from academic pronouncement to middlebrow mulling to literary speculation, the thumbnails of it differ.
For the great Harvard epistemologist W. V. Quine, philosophy meant philosophy of science, which, he famously declared, was “philosophy enough.” When former Economist executive editor Anthony Gottlieb boldly tried to wrap his arms around the history of the field from the ancient Greeks to the Renaissance in The Dream of Reason, he concluded that “there is no such thing as philosophy.”
Provocation accomplished, he then clarified the judgment: “The history of philosophy is more the history of a sharply inquisitive cast of mind than the history of a sharply defined discipline. The traditional image of it as a sort of meditative science of pure thought, strangely cut off from other subjects, is largely a trick of the historical light.” For French novelist Michel Houellebecq, in his recently published dialogues with French public intellectual Bernard-Henri Lévy, philosophy is, as American philosopher Richard Rorty asserted, “a genre of literature.” Houellebecq reports that he has “given up classifying it alongside rational certainty and placed it next to interpretations and narratives.”
Such uncertainties make it hard to decide which volumes officially stamped “philosophy” deserve the broadest attention in mass media. Two worthy old subgenres, nonetheless, often steal the limited spotlight at center stage. One is the historical overview—the classic example was Will Durant’s The Story of Philosophy, a best-selling Book-of-the-Month Club stalwart in its day. The other is the ambitious, theoretical tome—think of John Rawls’s A Theory of Justice—that promises, despite millennia of those nonclimaxes in the field, to settle forever some issue like “justice” or “beauty” or “the good.”
In the first half of 2011, James Miller won the Durant award with his Examined Lives: From Socrates to Nietzsche, which landed on the front page of The New York Times Book Review. The Rawls honoree was Ronald Dworkin, whose Justice for Hedgehogs won serious review attention for its claim that a terminally complex, pluralistic, much-contested concept—justice—is exactly what he says it is.
The former genre, it should be noted, flourishes even though the canon in philosophy is absurdly ossified in comparison with those of literature and history, meaning most new surveys of the field are as unadventurous as the ones before them. The latter genre continues to hoodwink well-meaning intellectuals outside academic philosophy despite a warning about system building issued long ago by the most charming of estranged professors in the field, George Santayana.
“Professional philosophers are usually only scholastics,” Santayana observed in his classic essay “The Genteel Tradition in American Philosophy,” describing them as “absorbed in defending some vested illusion or some eloquent idea. . . . They do not covet truth, but victory and the dispelling of their own doubts. What they defend is some system, that is, some view about the totality of things, of which men are actually ignorant. No system would ever have been framed if people had been simply interested in knowing what is true, whatever it may be.”
In his own contribution to the grand survey genre, Miller, a professor of politics at the New School for Social Research in New York City, offers a feast of conventional selection and judgment, and scant critical perspective. The lives of his 12 figures, or should we say 12 men—Socrates, Plato, Diogenes, Aristotle, Seneca, Augustine, Montaigne, Descartes, Rousseau, Kant, Emerson, Nietzsche—have been endlessly written about, and Miller proves a competent aggregator of key details.
Miller suggests at the outset that he’s bravely examining a countertradition to philosophy as currently understood in academe, that being “a purely technical discipline, revolving around specialized issues in semantics and logic.”
His boldness, however, is overstated. While such a technical approach still dominates many top research institutions, the “wisdom tradition” that Miller vaunts—those thinkers who care about how we should live our lives—commands its own share of the philosophy curriculum, or at least of humanities real estate. On this second front, Miller disappoints by presenting nothing new, but only potted bios of his greats, all done, admittedly, with the solid research of the professor Miller now is and the smooth style of the Newsweek journalist he used to be.
Socrates thus comes across as the usual cliché—an “impressive, even awe-inspiring moral figure,” with no attention to the severe criticisms of him (or at least Plato’s version of him) raised by everyone from younger classics scholars to I. F. Stone. The latter claimed that Socrates was a “snob” full of “immeasurable conceit” and “class prejudice,” an irritating questioner who “never questioned slavery” and “neglected the affairs of his family and his city to engage in constant conversation.”
Similarly, the word philosophy, whose meaning was severely contested in ancient Greece between Plato and Isocrates, is simply identified with Plato’s version (though Miller, to his credit, acknowledges the dispute in passing).
Miller writes that he’s “a historian by training, and facts matter to me.” But his ambition to distill the lives of so many figures apparently kept him from burrowing deep into the scholarly facts about any figure in particular, rendering his portraits, again, predictable. As with Socrates, his portrait of Rousseau exhibits little sense of the diverse views of the subject’s personality—say, by scholars such as Leo Damrosch and Maurice Cranston—even though earlier in his career Miller wrote a whole book on Rousseau.
Even more troubling, Miller’s justification for choosing 12 men—“They are all men, because philosophy before the twentieth century was overwhelmingly a vocation reserved for men”—is a preposterously blinkered judgment in this era of excellent scholarship on neglected women philosophers—see Mary Ellen Waithe’s multivolume History of Women Philosophers, or John Conley’s superb Suspicion of Virtue: Women Philosophers in Neoclassical France—and should have been challenged by someone who read the manuscript.
Examined Lives is, then, an exercise in the Higher Wikipedia, which is not meant to sound completely snide. As a readable introduction to its worthies, it’s fine. But those serious about exploring the philosophical tradition of pondering the exemplary life would be better advised to turn to the challenging work of the late French philosopher Pierre Hadot, particularly his Philosophy as a Way of Life.
If Miller’s book underwhelms by its timorous retailing of standard views, Ronald Dworkin’s Justice for Hedgehogs annoys because of its author’s trademark smugness. Long anointed as a kind of King of Jurisprudence by the New York Review of Books, which bestowed on him a powerful, protected status among academics in that field, Dworkin specializes in the illusion of argumentative rigor, wed to a clear but colorless style.
Fellow philosopher of law (and federal judge) Richard Posner wrote in his own book How Judges Think of Dworkin’s well-known position on judicial reasoning—that judges can find “right answers” in the law if they just think hard enough—and caustically observed, “Really what he has done is relabel his preferred policies ‘principles’ and urged judges to decide cases in accordance with those ‘principles.’”
One would expect a sophisticated philosopher to approach the concept of justice with humility. As the late American philosopher Robert C. Solomon observed: “What we call justice would not have been recognized as such in Homeric Greece or in the Athens of Plato and Aristotle 400 years later. It is very different from the sense of justice that one would find in feudal France, in the Florentine renaissance, or in the bourgeois London society of Jane Austen. It is very different, indeed, from the sense of justice one finds in contemporary Japan or Iran.”
But Dworkin, in Justice for Hedgehogs, sets out his fundamental principles and treats them as if they’re obvious and “mutually supporting.” As in his reasoning about judicial decision making, Dworkin rejects any form of relativism and argues that truth in morality is objective and can be shown to be so. The book’s title is a reference to Isaiah Berlin’s famous distinction, in “The Hedgehog and the Fox,” between the former, who knows one big thing, and the latter, who knows many little things.
Dworkin identifies with the hedgehog. He’s sure about one big thing—that there is a coherent unity among all human values—and his new book is the 79-year-old thinker’s final attempt to pull his whole theory together.
“I believe,” he writes in his opening “Baedeker,” or introduction, “that there are objective truths about value. I believe that some institutions really are unjust and some acts really are wrong no matter how many people believe that they are not.” Unfortunately, as in much of his work, Dworkin simply assumes that values held by well-educated, elite, liberal Westerners—for example, making one’s life a kind of work of art, respecting human dignity in one and all—are beyond question.
So, for instance, a fundamental shaping principle for Dworkin is that every life should be a “successful performance rather than a wasted opportunity”—that is, we should place extraordinary value on our own lives. Yet that’s a view shared around the world more by aggressively careerist professionals than by humbler, selfless sorts.
Another supposed core principle is that we should, in a Kantian manner, treat all other people as ends rather than means, and show equal concern for them. It’s a lovely sentiment, and one to which we might wish to subscribe, but a variety of cultures would object to showing equal concern for the kind and the cruel, the industrious and the lazy, just as many would reject the priority on “authenticity” that Dworkin urges.
What passes for rigorous argument in Dworkin’s work is usually arbitrary, stipulative redefinition of concepts, regardless of their general use. So, for Dworkin, “ethics” and “morality” are two different things (the first is “the study of how to live well,” the second is “the study of how we must treat other people”). In similar fashion, he divides “liberty” and “freedom” and with the help of that legerdemain, makes one of Isaiah Berlin’s signature claims—that liberty and equality inevitably clash—disappear. Dworkin’s notion of democracy, in turn, stresses an ideal of citizens as partners rather than competitors, surely one of his less plausible twists of meaning. Law, as always in Dworkin’s past work, becomes a “branch of morality.”
It’s not that one can’t prefer the way Dworkin articulates these notions—what irritates is his insinuation that any other understanding of them is wrong. He goes so far as to claim that even if no one existed to believe some of his fundamental judgments, they would still be true. He similarly contends that “we cannot defend a theory of justice without also defending, as part of the same enterprise, a theory of moral objectivity.” Even Rawls, particularly in his later work, did not take such a leap, notwithstanding the way that Dworkin, like Rawls, believes all our judgments must cohere in what Rawls called “reflective equilibrium.”
Alas, what Robert Solomon observed of prior justice theory might be applied to Dworkin’s massive new ahistorical effort as well: “The positions have been drawn, defined, refined, and redefined again. The qualifications have been qualified, the objections answered and answered again with more objections, and the ramifications further ramified and embellished. But the hope for a single, neutral, rational position has been thwarted every time. The attempt itself betrays incommensurable ideologies and unexamined subjective preferences. . . . We get no universal, strong, and complete system of justice.”
Does the wan familiarity of Miller’s reverent survey, and the colorless hubris of Dworkin’s fourth large treatise, suggest that nondevotees should pay less attention to philosophy’s surveyors and master builders? Here, the wisdom of Santayana comes in handy again. Any philosophical project, he asserted, regardless of the particular truths it may contain, must eventually be understood as “a work of imagination and a piece of human soliloquy.” Those who warm to the voices of Miller and Dworkin may find satisfaction here—others should look elsewhere.
The Secret Knowledge: On the Dismantling of American Culture
By David Mamet
241 pp. Sentinel. $27.95
NYT
This is an extraordinarily irritating book, written by one of those people who smugly believe that, having lost their faith, they must ipso facto have found their reason. In order to be persuaded by it, you would have to be open to propositions like this:
“Part of the left’s savage animus against Sarah Palin is attributable to her status not as a woman, neither as a Conservative, but as a Worker.”
Or this:
“America is a Christian country. Its Constitution is the distillation of the wisdom and experience of Christian men, in a tradition whose codification is the Bible.”
Some of David Mamet’s unqualified declarations are made even more tersely. On one page affirmative action is described as being “as unjust as chattel slavery”; on another as being comparable to the Japanese internment and the Dred Scott decision. We learn that 1973 was the year the United States “won” the Vietnam War, and that Karl Marx — who on the evidence was somewhat more industrious than Sarah Palin — “never worked a day in his life.” Slackness or confusion might explain his reference to the Scottish-Canadian newspaper magnate Lord Beaverbrook as a Jewish courtier in the tradition of Disraeli and Kissinger, but it is more than ignorant to say of Bertrand Russell — author of one of the first reports from Moscow to analyze and excoriate Lenin — that he was a fellow-traveling dupe and tourist of the Jane Fonda style.
Propagandistic writing of this kind can be even more boring than it is irritating. For example, Mamet writes in “The Secret Knowledge” that “the Israelis would like to live in peace within their borders; the Arabs would like to kill them all.” Whatever one’s opinion of that conflict may be, this (twice-made) claim of his abolishes any need to analyze or even discuss it. It has a long way to go before it can even be called simplistic. By now, perhaps, you will not be surprised to know that Mamet regards global warming as a false alarm, and demands to be told “by what magical process” bumper stickers can “save whales, and free Tibet.” This again is not uncharacteristic of his pointlessly aggressive style: who on earth maintains that they can? If I were as prone to sloganizing as Mamet, I’d keep clear of bumper-sticker comparisons altogether.
On the epigraph page, and again on the closing one, Mamet purports to explain the title of his book. He cites the anthropologist Anna Simons on rites of initiation, to the effect that the big secret is very often that there is no big secret. In his own voice, he states: “There is no secret knowledge. The federal government is merely the zoning board writ large.” Again, it is hard to know with whom he is contending. Believers in arcane or esoteric or occult power are distributed all across the spectrum and would, I think, include Glenn Beck. Mr. Beck is among those thanked in Mamet’s acknowledgments for helping free him from “the bemused and sad paternalism” of the liberal airwaves. Would that this were the only sign of the deep confusion that is all that alleviates Mamet’s commitment to the one-dimensional or the flat-out partisan.
I am writing this review in the same week as I am conducting a rather exhausting exchange with Noam Chomsky in the pages of a small magazine. I have no difficulty in understanding why it is that former liberals and radicals become exasperated with the pieties of the left. I have taught at Berkeley and the New School, and I know what Mamet is on about when he evokes the dull atmosphere of campus correctness. Once or twice, as when he attacks feminists for their silence on Bill Clinton’s sleazy sex life, or points out how sinister it is that we use the word “czar” as a positive term for a political problem-solver, he is unquestionably right, or at least making a solid case. But then he writes: “The BP gulf oil leak . . . was bad. The leak of thousands of classified military documents by Julian Assange on WikiLeaks was good. Why?” This is merely lame, fails to compare like with like, appears unintentionally to be unsure why the gulf leak was “bad” and attempts an irony where none exists.
Irony is one of the elements of tragedy, a subject with which Mamet is much occupied. He has read — perhaps before Glenn Beck’s promotion of it on the air — Friedrich von Hayek’s classic defense of the market, “The Road to Serfdom.” (I would guess he has not read Hayek’s essay “Why I Am Not a Conservative.”) Briefly, Hayek identified what he called “the Tragic View” of the free market: the necessity of making difficult choices between competing goods. Classical economics had already defined this as “opportunity cost,” which is just as accurate but less tear-jerking. We have long known it under other maxims — “to govern is to choose” — or even under folkloric proverbs about having cakes and consuming them. But to Mamet, Hayek is the brilliant corrective to the evil of Franklin Roosevelt, who “dismantled the free market, and, so, the economy,” and shares this dismal record with Nazis, Stalinists and other “Socialists.” More recent collapses and crimes in the private capital sector, and the Bush-Obama rescue that followed, strike him as large steps in the same direction.
Mamet began the book more promisingly, by undertaking to review political disagreements between conservatives and liberals in the light of his own craft: “This opposition appealed to me as a dramatist. For a good drama aspires to be and a tragedy must be a depiction of a human interaction in which both antagonists are, arguably, in the right.”
That was certainly Hegel’s definition of what constituted a tragedy. From a playwright, however, one might also have expected some discussion of what the Attic tragedians thought: namely, that tragedy arises from the fatal flaw in some noble person or enterprise. This would have allowed Mamet to make excursions into the fields of irony and unintended consequences, which is precisely where many of the best critiques of utopianism have originated. Unfortunately, though, he shows himself tone-deaf to irony and unable to render a fair picture of what his opponents (and, sometimes, his preferred authorities, like Hayek) really believe. Quoting Deepak Chopra, of all people, as saying, “Our thinking and our behavior are always in anticipation of a response. It [sic] is therefore fear-based,” he seizes the chance to ask, “Is it too much to suggest that this quote contains the most basic prescription of liberalism, ‘Stop Thinking’?” On that evidence, yes, it would be a bit much.
Eschewing irony, Mamet prefers his precepts to be literal and traditional. In case by any chance we haven’t read it before, he twice offers Rabbi Hillel’s definition of the golden rule and the essence of Torah: “What is hateful to thee, do not do to thy neighbor.” As with Hayek’s imperative of choice, the apparent obviousness of this does not entirely redeem it from contradiction. To Colonel Qaddafi and Charles Manson and Bernard Madoff, I want things to happen that would be hateful to me. Of what use is a principle that is only as good as the person uttering it? About as much use as the (unnamed) “doyenne” of the American left who, according to Mamet, recommends always finding out what MoveOn.org thinks and does, and then thinking and doing it. That, I suspect, was a straw antagonist — with no chance at all of being, “arguably, in the right” — and this is a straw book, which looks for tragedy in all the wrong places.
June 2011/Heavy sentences/The New Criterion
On How to Write a Sentence and How to Read One, by Stanley Fish.
Writing cannot be taught, but it can be learned. Learning to write sound, interesting, sometimes elegant prose is the work of a lifetime. The only way I know to do it is to read a vast deal of the best writing available, prose and poetry, with keen attention, and find a way to make use of this reading in one’s own writing. The first step is to become a slow reader. No good writer is a fast reader, at least not of work with the standing of literature. Writers perforce read differently from everyone else. Most people ask three questions of what they read: (1) What is being said? (2) Does it interest me? (3) Is it well constructed? Writers also ask these questions, but two others along with them: (4) How did the author achieve the effects he has? And (5) What can I steal, properly camouflaged of course, from the best of what I am reading for my own writing? This can slow things down a good bit.
All sorts of people write books that promise shortcuts to writing well, most not particularly helpful, if only because shortcuts are not finally available. Over the years, I have consulted many of these books, on rare occasions taking away a helpful hint or two, but not much more. The most famous is Strunk and White’s Elements of Style, which is devoted to teaching the composition of prose clear, crisp, and clean of excess verbiage or tricky syntax, served up in what is called the active voice. Nothing wrong with clear, crisp, and clean prose, or with the active voice, but The Elements of Style is limited in its usefulness, if only because there are more ways of writing well than the ideal advocated by its authors. On the Strunk and White standard, for example, I suspect my opening sentence would have to be heavily edited, if not deleted.
The best book on the art of writing that I know is F. L. Lucas’s Style (1955). Lucas was a Cambridge don, a Greek scholar, and an excellent literary essayist, especially good on eighteenth-century writers, who wrote a once-famous book called The Decline and Fall of the Romantic Ideal. Style is filled with fine things, but the most useful to me in my own writing has been Lucas’s assertion that one does best always to attempt to use strong words to begin and end sentences. Straightaway this means eliminating the words “It” and “There” to begin sentences and dropping also the pompous “Indeed.” This advice also reinstates and gives new life to the old schoolmarmish rule about not ending a sentence with a preposition, for a preposition is almost never a strong word.
F. L. Lucas wrote the best book on prose composition for the not-so-simple reason that, in the modern era, he was the smartest, most cultivated man to turn his energies to the task. He understood the element of magic entailed in great writing—and understood, too, that “faulty greatness in a writer stands above narrower perfections.” He also knew that in literature style can be a great preservative, and “how amazing it remains that . . . perfection of style can still do much to immortalize writers of the second magnitude like Horace and Virgil, Pope and Racine, and Flaubert himself.” Pause a moment to consider the wide reading required to have written that last sentence.
Style, according to F. L. Lucas, “is simply the effective use of language, especially in prose, whether to make statements or rouse emotions.” That it is, but it is also of course much more. Even though there have been national (English, French, German) and historical (baroque, rococo, plain) styles, style itself is not finally about ornamentation or its absence. In its subtlest sense style is a way of looking at the world, and an unusual or sophisticated way of doing so is not generally acquired early in life. This is why good writers rarely arrive with the precocity of visual artists or musical composers or performers. Time is required to attain a point of view of sufficient depth to result in true style.
In a chapter boldly titled “The Foundation of Style—Character,” Lucas writes that “the beginning of style is character.” I write “boldly,” for what, one might at first think, has character to do with composition? “The fundamental thing,” Lucas explains, “is not technique, useful though that may be; if a writer’s personality repels, it will not avail to eschew split infinitives, to master the difference between ‘that’ and ‘which,’ to have Fowler’s Modern English Usage by heart. Soul is more than syntax.”
Lucas didn’t hold that good character will make an ungifted person write better, rather that without good character superior writing is impossible. And, in fact, most of the best prose writers in English have been men and women of exceedingly good character: Samuel Johnson, Edward Gibbon, Jane Austen, William Hazlitt, Charles Lamb, George Eliot, Matthew Arnold, Anthony Trollope, Henry James, Max Beerbohm, George Orwell, T. S. Eliot, Willa Cather. Even those excellent writers with less than good character—compose your own list here—seem to have been able to fake good character, at least while at their desks.
F. L. Lucas fought and was wounded in World War I, opposed the British policy of appeasement, was properly skeptical of the Soviet Union, and, along with H. W. Fowler, had acquired the most interesting point of view of those who have attempted books on the art of writing well. A paragraph from Lucas’s first chapter, “The Value of Style,” will suffice to render his point of view, with its fine sense of perspective and proportion, plain:
It is unlikely that many of us will be famous, or even remembered. But not less important than the brilliant few that lead a nation or a literature to fresh achievements, are the unknown many whose patient efforts keep the world from running backward; who guard and maintain the ancient values, even if they do not conquer new; whose inconspicuous triumph it is to pass on what they inherited from their fathers, unimpaired and undiminished, to their sons. Enough, for almost all of us, if we can hand on the torch, and not let it down; content to win the affection, if it may be, of a few who know us and to be forgotten when they in their turn have vanished. The destiny of mankind is not governed wholly by its “stars.”
First day of class I used to tell students that I could not teach them to be observant, to love language, to acquire a sense of drama, to be critical of their own work, or almost anything else of significance that comprises the dear little demanding art of putting proper words in their proper places. I didn’t bring it up, lest I discourage them completely, but I certainly could not help them to gain either character or an interesting point of view. All I could do, really, was point out their mistakes, and, as someone who had read much more than they, show them several possibilities for deploying words into sentences, and sentences into paragraphs, of which they might not have been aware. Hence the Zenish koan with which I began: writing cannot be taught, but it can be learned.
In How to Write a Sentence and How to Read One, Stanley Fish, in his jauntily confident manner, promises much more. Fish’s central key to good writing, his Open Sesame, is to master forms of sentences, which can be imitated and later used with one’s own content when one comes to write one’s own compositions. Form, form, form, he implores, it is everything. “You shall tie yourself to forms,” he writes, “and forms will set you free.”
By forms Stanley Fish means the syntactical models found in the sentences of good writers, or sometimes even in grabber lines from movies, or even interviews with movie stars: “If you want to see the girl next door,” he recounts Joan Crawford saying, “go next door.” He serves up John Updike’s sentence about Ted Williams’s last home run in Fenway Park—“It was in the books while it was still in the sky”—as a form that can be made use of in one’s own writing by ringing changes on the original. “It was in my stomach while it was still on the shelf” is Fish’s example of such a change.
Fish’s first bit of instruction is that one practice ringing changes on these forms, over and over again, as a beginning music student might practice scales. “It may sound paradoxical,” he writes, “but verbal fluency is the product of hours spent writing about nothing, just as musical fluency is the product of hours spent repeating scales.” He adds: “For the purposes of becoming a facile (in the positive sense) writer of sentences, the sentences you practice with should have as little meaning as possible.” Is this true? Taking the Updike sentence for my model, allow me to kitchen-test the method: “My toches was still in Chicago while my mind was in Biarritz”; “My mind was still in Vegas while my toches was in the Bodleian.” I fear it doesn’t do much for me, but perhaps I am too far gone for such warming-up exercises.
The larger point for Fish is that one learns to write
not by learning the rules [of grammar, syntax, and the rest], but by learning the limited number of relationships your words, phrases, and clauses can enter into, and becoming alert to those times when the relationships are not established or are unclear: when a phrase just dangles in space, when a connective has nothing to connect to, when a prepositional phrase is in search of a verb to complement, when a pronoun cannot be paired with a noun.
That ungainly Fishian sentence is of course itself built on reciting a few rules, but let that pass. The first thing that one might argue with in Stanley Fish’s method is that the number of relationships that words, phrases, and clauses can enter into is not limited, but nearly inexhaustible. In art, anyone writing a book on how to write ought to remember there are no rules except the rule that there are no rules. One does come upon a sentence with a fresh form from time to time, and makes a note to abscond with it, but learning the forms of sentences alone will not take one very far. The argument of How to Write a Sentence is that it will take you all the way: “Hence the formula [of this book]: sentence craft equals sentence comprehension equals sentence appreciation.”
Some well-established sentence forms are, in fact, better neglected. Those sentences that begin with the word “Although,” or those sentences requiring a “however” somewhere in their middle, are almost always dead on arrival. If a form is imitable, it is probably stale, and hence best avoided. Superior writers do not seek out old forms. They create forms of their own devising.
Fish’s notion that “without form, content cannot emerge” is not very helpful either, and, except in the most blatant way, untrue. Content obviously needs to be given form, but in my experience it dictates form rather more than the other way round. Form too well fixed, in fact, is ripe for comic response. “The world is everything that is the case,” wrote Wittgenstein, “So stop your blubbering and wash your face,” added, several years later, the poet Donald Hall.
If one is to write a solid book on how to write, one ought on every page to demonstrate one’s own mastery of the skill. H. W. Fowler, the author of Modern English Usage, a writer with great powers of formulation, dressed out in witty peremptoriness, was easily able to do so. Here he is on the delicate matter of the split infinitive: “The English-speaking world is divided into (1) those who neither know nor care what a split infinitive is; (2) those who do not know, but care very much; (3) those who know and condemn; (4) those who know and approve; and (5) those who know and distinguish.”
Ernest Gowers, who revised Modern English Usage and wrote an excellent book called Plain Words intended to eliminate the pompo-verbosity of bureaucrats, commanded a fine common-sense style suitable to his message. Writing early in the history of the feminist incursions on language, Gowers noted: “chairperson and other new words ending in person have yet to win general approval. Meanwhile, it is safer for official writers to be cautiously conservative, and to take evasive action where possible.”
Stanley Fish is not a writer of this caliber. He is a fluent, sometimes a lively (for an academic), but finally an undistinguished writer. A self-advertised sophist, he is most at home in polemic. Sentence by sentence, this would-be connoisseur of sentences is insufficiently scrupulous. He often roams deep into cliché country. “You can talk the talk,” he writes, “but you can’t walk the walk.” Earlier he writes that “the very thought of putting pen to paper, an anachronism I find hard to let go of, is enough to bring on an anxiety attack.” An anachronism isn’t the same as a cliché, and pen to paper, as clichés go, is blue ribbon, and let go of it, gladly, Fish should have done. His diction, or word choice, is commonplace: those worn-out vogue words “focus,” “meaningful,” and “bottom line” come to him all too readily. “But, far from being transparent and incisive,” he writes, “these declarations come wrapped in a fog; they seem to skate on their own surface and simply don’t go deep enough.” Take three metaphors, mix gently, sprinkle lightly with abstraction, and serve awkwardly. These infelicities are from Fish’s first twenty pages. Many more, to stay with my salad metaphor, are peppered throughout the book.
Unless one is considering aphorisms or maxims, the study of the sentence, by itself, has its severe limits. After one has charted simple, complex, and compound sentences, mentioned sentences dominated by subordinate clauses and sentences that are additive, or add one clause after another on their tail end, there isn’t all that much useful to say, except that one sentence is ill- and another well-made, one tone deaf and another sonorous.
Fish ignores the crucial fact that sentences owe their form and their language to their place in that larger entity, the paragraph. One cannot know the form one sentence is to take until having taken into cognizance the sentence, or sentences, that precede it. As the principle of poetry is—or once was—uniformity of meter, so the reigning principle of prose is variety, which means avoiding uniformity of syntax, sameness of rhythm, and repetition of words. A sentence, every sentence, is a tile in a briefer or lengthier mosaic known as a paragraph. No sentence, like no man, as the poet said, is an island.
Here is a paragraph from that brilliant prose mosaicist Evelyn Waugh from his biography Ronald Knox, in which I can descry seven artless—which is to say perfectly artistic—sentences and no clear forms whatsoever:
Ronald had no desire to grow up. Adolescence, for him, was not a process of liberation or of adventure. Manhood threatened him with tedious duties and grave decisions. His mind had flourished and matured while his heart was still a child’s. He grew up slowly. Each stage of his growth imposed a burden; each enlargement of spirit, the loss of something fond. Perhaps some instinctive foreboding of the heaviness of the coming years tinctured his love of Eton and sharpened his longing to delay.
The only sentences that stand alone—that is, that are not utterly dependent on what has come before them—are the first and, to a lesser extent, the last sentences in a composition. Fish defines the missions of first and last sentences thus: “First sentences . . . are promissory notes,” prefiguring what is to follow. Last sentences “can sum up, refuse to sum up, change the subject, leave you satisfied, leave you wanting more, put everything into perspective, or explode perspectives.” I should put it differently. Excellent first sentences are about seduction, seducing the reader, at a minimum, to read the second sentence. Fish chose to ignore the best first sentence in literature, which is Tolstoy’s in Anna Karenina: “Happy families are all alike; every unhappy family is unhappy in its own way.” Often first sentences aren’t composed as first sentences at all, and rare, I should say, is the writer who has never had the experience of discovering that his initial first sentence was a misfit and that his composition starts better by opening with the sentence beginning his second or third or fourth paragraph.
As for last sentences, along with that of A Tale of Two Cities (“It is a far, far better thing that I do than I have ever done before; it is a far, far better rest that I go to, than I have ever known”) of which Stanley Fish doesn’t approve, the best in my opinion is that which ends Madame Bovary: “Monsieur Homais received the legion of honor,” which signals a victory for all that its author wants his readers to despise. The task of a fine last sentence is to set the plane down safely, without any bumps, and with the satisfying sense of a trip completed.
Stanley Fish refers to himself as a “sentence nut” and at one point refers to “the wonderful world of sentences,” reminding me of nothing so much as of the Erpi Classroom films of my boyhood, which contained lines like “Wonderful world of fungus,” with the public-school camera generally grinding to a slow breakdown on the word fungus. In his chapter “What Is A Good Sentence?” he neglects to tell us what, precisely, it is, perhaps because there is no convincing solitary answer. He teaches, as the old proverb has it, by example—the example of a few score sentences scattered through his book.
What these ostensibly exemplary sentences prove is that, in the realm of sentences, tastes differ. Fish exults over sentences by Leonard Michaels, D. H. Lawrence, Virginia Woolf, and Ralph Waldo Emerson that I find without power or charm. He cites an abstruse sentence from Joseph Conrad’s The Nigger of the “Narcissus” on the composition of sentences when much better, and more efficient, sentences were available to him in the preface to that same story: “My task which I am trying to achieve is, by the power of the written word, to make you hear, to make you feel—it is, before all, to make you see,” followed by the clincher close, “That—and no more, and it is everything.”
Fish quotes a few sentences by Gertrude Stein—he ends his book on her writing about the seduction of diagramming sentences—and is under the impression that she was a great writer. (He also quotes from that most useless, for the real writer, of essays, Edgar Allan Poe’s “The Philosophy of Composition.”) What Gertrude Stein attempted was to make prose do what the great avant-garde art of her day—that of painting—did, by writing in the continuous present and using boring repetitions as if filling in a canvas. She failed in her attempt, and a good thing, too, for English prose.
Perhaps the reason for the rather poor choices of so many of Fish’s sentences is that they allow him, in obeisance to his subtitle, which promises how also to read a sentence, to do rather elaborate riffs—explications du texte, in the old New Criticism phrase—on these sentences. While many of these heavy-breathing exercises allow Professor Fish to work himself up into a fine pedagogical lather, their chief effect on this reader is to remind him that it is good no longer to be a student.
I seem to have written more than three thousand words without a single kind one for How to Write a Sentence and How to Read One. To remedy this, at least partially, let it be noted that, at 165 pages, index and acknowledgments and biographical note on the author included, it is a short book.