Saturday, April 30, 2011
Death of Canadian Liberals
Michael Ignatieff, being a man of letters and cultivated intelligence, is quite likely familiar with George Dangerfield’s 1935 classic book, The Strange Death of Liberal England.
The years before, during and after the First World War swallowed up the British Liberal Party in a “strange death,” strange because of “the approaching catastrophe of which the actors were unaware.”
Now, peering into the electoral abyss at the end of a campaign Mr. Ignatieff and his party so resolutely sought, Liberals might be witnessing their own “strange death,” or at least a new stage of political deterioration that spells the end of Liberal Canada.
Liberal Canada lasted a long time, from the election of Wilfrid Laurier in 1896 to Pierre Trudeau’s departure in 1984. A recovery ensued under Jean Chrétien for almost a decade starting in 1993, but the conservative forces were foolishly divided in those years and the Liberals themselves were corrosively split into two factions that time eventually made devastatingly public in the sponsorship scandal.
Liberal Canada’s singular contribution had been to keep French Quebeckers and other Canadians united in one country. In retrospect, the 1981-1982 patriation of the Constitution, engineered brilliantly by Mr. Trudeau, weakened that bridge by turning many francophones away from the Liberals. In reinforcing a country, he lost a large part of a province for his party.
The Liberals have not won a majority of Quebec’s seats in a general election since 1980, and now they’re reduced to a shrunken harvest of largely non-francophone voters. They are the party, honourably, that stands for a strong central government in a province that doesn’t want one.
Liberals thought they could count on the immigrant communities for whom Mr. Trudeau and his legacy were so popular. But when the Harper Conservatives began contesting some of those communities with sustained attention, changed policies and repeated blandishments, even this pillar of the shrunken Liberal coalition, already weakened by the long-ago departure of Western Canada and the more recent disaffection of Quebec, began to shake.
So, too, Liberals were being ousted from the industrial and northern cities of Ontario they had dominated for so long. And for a party that had pioneered protection for the official languages, they were even losing ground in French-speaking areas outside Quebec, such as Acadia, Eastern Ontario and St. Boniface.
The arching coalition of Liberal Canada, therefore, had been shrivelling for years, even if “the actors were unaware” of the unfolding decline. Mr. Ignatieff and his advisers convinced themselves that the anti-democratic tactics of the Harper Conservatives and economic uncertainties post-recession had made the electorate ready for a change, although there was little evidence of such a readiness.
As a student of Tolstoy’s War and Peace, Mr. Ignatieff forgot the lessons of the Russian general Kutuzov, who waited and waited for events to destroy Napoleon, refusing to give battle until the French had been weakened by their own follies sufficiently to be defeated in combat.
The “approaching catastrophe” wasn’t what Mr. Ignatieff and his advisers had in mind when they precipitated an election the country mostly didn’t want. That they might be replaced by the NDP as the alternative to the Conservatives never crossed their minds, for when had that party climbed above 20 per cent in the polls?
They were confident that the more the country saw of Mr. Ignatieff, the more it would admire him. But the reverse occurred, and some of those who couldn’t abide the Harper Conservatives turned to Jack Layton, who’d been around for almost a decade as NDP Leader and who kept repeating much of what he always said, a threat no Liberal took seriously until it was far too late.
Defeat will mean less public money and fewer private contributions. It will cost the party MPs, morale and purpose. Liberals have burned through three leaders in six years, convened a policy conference, tried campaigns of bold ideas and less courageous ones, and now can only recall through the mists of memory a time when there was a Liberal Canada.
Tuesday, April 26, 2011
Simon Blackburn on Stanley Fish's Book on Sentences
TNR/April 26, 2011
How To Write A Sentence And How To Read One
WE HUMANS CAN APPRECIATE many things. It is one of our most attractive qualities. How could Rodolfo not fall in love with Mimi as she sings her own rapture at the first sunshine after winter, the first kiss of April? In this small feast of a book Stanley Fish displays his love of the English sentence, and even without Puccini to help, his enthusiasm is seductive. His connoisseurship is broad and deep, his examples are often breathtaking, and his analyses of how the masterpieces achieve their effects are acute and compelling.
For Fish a great sentence is like a great athletic performance. It is an example of something done supremely well, so well that it cannot be bettered. Other similar feats will come along, but only to stand alongside it. What exactly is done in such a performance? There is no single answer, indeed no finite answer since there is no limit to the things that can be done with words. But it is what Conrad called the “shape and ring” of sentences, the perfect adaptation of form to achievement, that Fish wants to share.
It is wrong to think that the sentence is a mere slave, whose function is to bear content, which, while being the really important thing, is also something that could equally have been borne by another. Change the shape and ring, and you change everything. The balance, the alliterations, the variation, the melody, the lights glimmering in the words, can work together to transform even an ugly thought into something iridescent, as when Eli Wallach in The Magnificent Seven expressed his character’s indifference to the suffering he brings the peasants in one perfect, albeit perfectly brutal, sentence: “If God didn’t want them sheared, he would not have made them sheep.” As Fish says in his analysis of this example, here the “air of finality and certainty” is clinched by “the parallelism of clauses that also feature the patterned repetition of consonants and vowels” and then, of course the inevitability of that last dismissive word. If the devil has the best tunes, sometimes the bandits have the best sentences.
The philosopher Frege said that only in the context of a sentence do words have meaning. Fish agrees, as did Wittgenstein: “The world is everything that is the case.” In a sentence a sequence of words becomes more than just a list. It breathes and takes wing, becoming a messenger in one or more of our innumerable transactions with the world and with each other. It may pick up the past, or tilt us towards the future. Fish illustrates how if we become sensitive to the way forms can be used and repeated, then, when we have something to say, we are more likely to have the right instrument to hand with which to say it. I hope this is true. He singles out, for instance, John Updike’s remark about a great home run—“It was in the books while it was still in the sky”—and asks, how hard is it to write a sentence like this? In one respect, not hard at all, and he offers “it was in my stomach before it was off the shelf” as his “relatively feeble” attempt to “approach Updike’s art by imitating it.” Of course, this didactic technique only takes you so far. Fish has undoubtedly perceived the way that “it was still in the sky” works, helping the record books to set that passing moment forever in the amber of time, and it may seem a little forward to suppose that any liaison between “off the shelf” and “in my stomach” goes very far towards approaching it. But we have to learn with training-wheels.
Or perhaps it is enough if we simply attune ourselves to structure, balance, rhythm, and precision. Generations of English listeners had this sensitivity practiced weekly, as the great sonorous cadences of the King James Bible rolled over them: “And the earth was without form, and void, and darkness was upon the face of the deep …” Without that astonishing language, we would have had no Milton, no Johnson, no Wordsworth, no Lawrence, no Eliot. The last is particularly interesting, since the famous words beginning his “Journey of the Magi” are directly taken from a letter Lancelot Andrewes, one of the Jacobean translators, wrote to King James: “A cold comming they had of it, at this time of the yeare: just the worst time of the yeare, to take a journey, and specially a long journey, in. The ways deepe, the weather sharp, the daies short, the sunn farthest off…the very dead of winter.” Two sentences, admittedly, but what a pair. Presumably deliberately, Fish avoids examples from either Shakespeare or the King James, preferring instead the less familiar, but to my ear less accessible, solemnities of the later seventeenth century in Milton and Bunyan.
Do shape and ring matter? Perfection always matters. Without the sensitivity Fish admires, we would not only have no great literature. We would also have had no Gettysburg Address, no Churchill, and no Martin Luther King, Jr. If we cannot move people’s souls, we cannot move their ways of living either: “Let me write the songs of a nation, and I care not who writes its laws.” Fish is sparing of reference to philosophers, but some of us turn a reasonable sentence. “What is at stake is far from insignificant: it is how one should live one’s life” says Socrates, in a sentence that itself exhibits the excellence to which it directs us. A perfect sentence may also be dangerous. “Man is born free, but is everywhere in chains” opens Rousseau’s Social Contract, and arguably the shape and ring of this sentence orchestrated the French Revolution, and even its decline into totalitarianism.
Perhaps philosophy is not much on show since often it sends not single sentences but battalions. One of Marx’s most famous sentences works by itself, but is much better when juxtaposed with its two predecessors: “Religious suffering is, at one and the same time, the expression of real suffering and a protest against real suffering. Religion is the sigh of the oppressed creature, the heart of a heartless world, and the soul of soulless conditions. It is the opium of the people.”
Sentences matter, perhaps more than anything else, so I shall end with a contemporary application. As I write, the British government is imposing a duty on academics to show that their work has “impact,” which is to be a provable occurrence of social, economic, or political benefit, signed and witnessed within the last five years. This insanity could only have come about because not a single one of its perpetrators had read or understood one of Fish’s favorites, the final sentence of Middlemarch, contrasting Dorothea’s quiet future with the idealistic visions of doing good with which she started life: “But the effect of her being on those around her was incalculably diffusive: for the growing good of the world is partly dependent on unhistoric acts, and that things are not so ill with you and me as they might have been, is half owing to the number who lived faithfully a hidden life, and rest in unvisited tombs.” Perfect.
Simon Blackburn is the Bertrand Russell Professor of philosophy at the University of Cambridge, and a Research Professor at the University of North Carolina, Chapel Hill.
Should a Law Degree Be a Prerequisite To Being a Lawyer?
Eliminate law schools altogether, or at least sharply minimize them, as a necessary step to being an accredited lawyer, and make passing the bar exams or whatever competency tests any jurisdiction lays down the sole condition of accreditation. Let applicants make their own way to these tests, whether by way of law schools, home schooling/self-study, online schools or whatever. What do law schools teach in the way of professional preparedness? Ethics, black-letter-law competency in the usual areas, how to read cases and statutes, how to write and reason like a lawyer. Why should law schools have any monopoly on that, and who's to say other modes of acquiring such knowledge and such skills can't do as good a job or better? If a counter-argument to this proposal is "What about other professions, say doctors?", my answer is that I don't know about them, but I know something about the legal profession.
Monday, April 25, 2011
With a Friend Like Obama...
Abbas: Full Settlement Freeze Was Obama’s Invention
Omri Ceren/04.25.2011/Contentions
The background on Obama’s 2009 and 2010 diplomatic offensives against Israel is now well known enough that the narrative is inching toward conventional wisdom. The president entered the White House intent on putting daylight between the United States and the Jewish state.
He chose settlements as a wedge issue designed to split Netanyahu from the Israeli public and topple the government, in the process changing the widely understood interpretation of “settlement freeze” from “no expansion outside existing blocs” to “no Jewish construction over the Green Line even in Jerusalem.” Either Netanyahu would halt all construction and lose the Israeli right, the thinking went, or he would put himself on the wrong side of the United States president and lose the Israeli center. Satisfyingly clever.
Of course the administration’s reading of Israeli polling data was flat wrong, and even Israeli opposition chairwoman Tzipi Livni insisted that Jerusalem was a consensus issue. The Israeli public rallied behind Netanyahu, while distrust in Obama and his reliability as an ally — a precondition to Israel taking risks for peace — skyrocketed.
But having categorically stated that it was simply impossible for the Palestinians to negotiate while Jews built schools and supermarkets in East Jerusalem, the White House couldn’t then admit that a “full freeze” was just a gambit meant to weaken Netanyahu. So that continued to be the official U.S. position through the end of 2010, until the White House had to nuance the counterproductive request. Of course by that time Palestinian negotiators, unable to be less anti-Israel than the U.S. president, had incorporated it as a precondition for talks. They didn’t have the option of abandoning it when the White House did, and the peace process remained moribund.
Again, this is all more or less conventional wisdom. Still, it’s nice to have confirmation:
[Abbas] told me bluntly that Obama had led him on, and then let him down by failing to keep pressure on Israeli Prime Minister Benjamin Netanyahu for a moratorium on settlement building in the West Bank last year. “It was Obama who suggested a full settlement freeze,” Abbas explained. “I said OK, I accept. We both went up the tree. After that, he came down with a ladder and he removed the ladder and said to me, jump. Three times he did it.”
The question, as always, isn’t just about the decision but about the decision-making process. Which obviously clumsy advisers convinced the president that the strategy was sound, and are they still prognosticating on Israeli calculations and Palestinian intentions? What obviously inaccurate assumptions were they using, and are those beliefs still guiding our Middle East policymaking? Because when someone charts a course that’s flawed in precisely predictable ways, when they dismiss those precise objections with specific justifications, and when they turn out to be precisely wrong — they generally get replaced. But there’s not much evidence that ever happened.
Of course it’s difficult to know from the outside where exactly things went awry, and who was making up which anti-Israel pretexts. The administration’s foreign policy is a hodgepodge of institutionalized ideology and wishful thinking, with various factions all vying for the president’s ear and trying to be unwittingly wrong in their own special way.
There are old peace-process hands who interpret obsolete data through outmoded preconceptions, and who suggest tactics that are too clever by half and misguided in full. There are anti-Israel Jewish activists who whine about exclusion while insisting that they represent American Jewry, and who leverage their access to the president to peddle fantasies about American Jewish sentiments.
There are multilateralists who resent having to defend our only stable Middle Eastern ally from global hostility, and gesture vaguely at ad-hoc international solutions and national credibility. There are diplomats and scholars whose institutional importance rises and falls as a function of the centrality of the Arab world, and who overstate the moderation of Arab governments while understating the pathology of the Arab Street.
And that’s before we get to the quotidian antipathy that many in the administration harbor toward the Israelis, an antipathy that apparently makes any anti-Israel reasoning — no matter how thin — seem like the height of sophistication.
A Take Down of Harold Bloom
By Carlin Romano/April 2011 Chronicle Review
If you look up "Bloom, Harold" under "author" in the University of Pennsylvania's main library catalog, the computer shoots back 846 entries. Most are his Chelsea House collections of critical essays on authors, each one "edited and with an introduction" by Harold Bloom.
Now 80, Yale's longtime Sterling Professor of the Humanities has knocked out volumes on, for starters, A.E. Housman, A.R. Ammons, Agatha Christie, Albert Camus, Aldous Huxley, Alexander Pope, Alexander Pushkin, Alexander Solzhenitsyn, Alfred Lord Tennyson, Alice Munro, Alice Walker. ...
I'd proceed right through the alphabet—or at least to the letter "B"—but my editor insists that I write some of this essay myself. Those volumes, of course, don't include the collections by Bloom, also "edited and with an introduction," on specific works of literature. He's written about Aeschylus's Oresteia; Alan Paton's Cry, the Beloved Country; Albert Camus's The Stranger; Aldous Huxley's Brave New World; Alex Haley's The Autobiography of Malcolm X; Alexander Pope's The Rape of the Lock; Alice Walker's The Color Purple. ...
OK, OK, I'll stop. But what kind of cultural critic could feel comfortable mentioning just a fraction of the works of the scholar under review? And the books I've mentioned don't include the volumes by Bloom on categories of writers, such as African-American Poets; American and Canadian Women Poets: 1930-Present; American Fiction 1914-1945; American Naturalism; American Poetry 1915 to 1945; American Poetry 1946 to 1965; American Poetry Through 1914. ...
All right, all right—no more lists. I'm warmed up. How does one interpret such a Brobdingnagian output? It being nearly Passover as I write this, let me count three ways.
1. Professor Bloom began to need more money in the 1980s. His publisher, Chelsea House, gladly offered it to him, knowing that his name on volumes guaranteed library sales. Bloom organized what he called "the factory" in New Haven, employed 16 full-time staffers at one stage, as well as scores of freelance graduate students, and at his peak knocked out 15 books a month and three introductions a week.
2. Professor Bloom's singularly capacious mind and reading habits—he once responded to the "myth" that he could read 1,000 pages an hour by explaining he could read only 400—mean that he actually does know enough about all the writers, books, and literary categories on which he's edited essays to make this spectacle magnificent rather than tawdry. That is, he knows as much as we're entitled to expect a scholar to know when his or her name appears on such a collection.
3. Professor Bloom believes his prolific performance has increased his prestige and critical standing—already high thanks to his Yale chair, a MacArthur fellowship, and books such as A Map of Misreading and Kabbalah and Criticism—thus creating enhanced credibility for his own more substantive writing.
Those interested in the first interpretation should look up the biographical pieces in New York magazine, The New York Times, the Paris Review, and elsewhere. With the publication of Bloom's latest book, The Anatomy of Influence: Literature as a Way of Life (Yale University Press), it behooves us to weigh the second and third.
At the outset of Anatomy, Bloom tells us he'll be updating his controversial The Anxiety of Influence (Oxford University Press, 1973) by commenting on about 30 writers, all of whom he's written on before. A third of the book is devoted to Shakespeare, who, Bloom concedes, has long been his "obsessive concern," sharing a godlike position with Emerson in Bloom's literary universe. But the first warning sign of the slapdash comes early: In just the first 30 pages, the author repetitively calls the book a "final reflection upon the influence process," his "final statement on the subject," his "virtual swan song," and his "last reflection upon influence."
The promise of updating also proves shaky.
In "The Point of View for My Work as a Critic," Bloom recalls yet again, as he has elsewhere in his work, his early infatuation with Hart Crane and William Blake, his appreciation of Samuel Johnson, his insistence that literature is life itself, his "possession" of much immortal literature by memory, his antipathy to nonaesthetic approaches to literature. In regard to the last, he makes clear his continued hostility to that "rabblement of lemmings" and "resentniks," as he has memorably labeled his critics, whose ranks he saw filled by "feminists, Marxists, purple-haired semioticians, new historicists, Lacanians, De Manians."
Also familiar rather than fresh is Bloom's characteristic need to clear his op-ed throat: "Twenty-first century America is in a state of decline," he reports.
"It is scary to reread the final volume of Gibbon these days because the fate of the Roman Empire seems an outline for the imperial presidency of George W. Bush retraced, and that continues even now. We have approached bankruptcy, fought wars we cannot pay for, and defrauded our urban and rural poor. Our troops include felons, and mercenaries of many nations are among our 'contractors,' fighting on their rules or none at all. Dark influences from the American past congregate among us still. If we are a democracy, what are we to make of the palpable elements of plutocracy, oligarchy, and mounting theocracy that rule our state? How do we address the self-inflicted catastrophes that devastate our natural environment? So large is our malaise that no single writer can encompass it. We have no Emerson or Whitman among us. An institutionalized counterculture condemns individuality as archaic and depreciates intellectual values, even in the universities."
When the Paris Review's interviewer in the early 90s asked Bloom who edited him, Bloom replied, "No one edits. I edit. I refuse to be edited." That still seems the case, with sad results.
Officially, Anatomy reappraises Bloom's The Anxiety of Influence—excoriated by poet Howard Nemerov as "nonsense"—which declared all poets to be intimidated by earlier poets and great ones to be committed to "misreadings" of predecessors as the price of originality. Bloom now calls his earlier book "a brief, gnomic theory of poetry, free of all history except literary biography." He concedes that "it is a hard read, even for me," because it carried "an undercurrent of foreboding" and constituted "an attempt to forge a weapon against the gathering storm of ideology that soon would sweep away many of my students."
In conceiving Anatomy as a chance to look back on his work, to wed his thinking about influence from 1967 to 1982 with "more public reflections" of the past decade, he wishes "to say in one place most of what I have learned to think about how influence works in imaginative literature." Now, he declares, "I define influence simply as literary love, tempered by defense."
Yet it's not clear that much has changed for Bloom. Literary folk in the trenches know that many poets don't care much about specific predecessors. Bloom counters that he's always talking about "an anxiety achieved in a literary work, whether or not its author ever felt it." So it's not necessarily a psychological state in the later poet. Even if you've never met a writer who feels "threatened by the prospect of imaginative death, of being entirely possessed by a precursor," Bloom is not necessarily wrong in his theory.
You don't have to be Karl Popper (who believed a thesis has to be falsifiable to be potentially true) to see the problem with Bloom's claim. When "students ask me why great writers cannot start out fresh, without any past at their back," Bloom confides, "I can only tell them that it just does not work that way." As the poet and critic John Hollander once remarked to The New York Times, "Harold is not particularly a good explainer."
Indeed, he's a consummate assumer. Bloom continues to posit art "as a contest for the foremost place," and agon, or conflict, as "a central feature of literary relationships," even though he knows that it is for some artists, and isn't for others. He's still proud to be an "incessant canonizer." He still talks of the "poet-in-a-poet," by which he means "his daemon, his potential immortality as a poet, and so in effect his divinity."
For the critic fond of literary crushes but eager for clear concepts, convincing evidence of links among writers, and philosophical coherence in shaping a canon of greatness, Bloom disappoints. Even those who admire aspects of his work express regrets. Alan Rawes and Jonathon Shears, coeditors of Reading, Writing and the Influence of Harold Bloom, concede that their essay collection must cover those "aspects of Bloom—the pedantry, conservatism, hysteria, and silliness—that so many readers have found in his work." Bloom's literary passion comes soaked in so much bile toward those who love literature differently that it seems a kind of personality disorder rather than healthy aesthetic judgment.
When he practices the fundamental critical task of interpreting one writer's link to another, we get arbitrary foolishness: that Leopardi's "possession" of Dante and Petrarch is "miraculous" rather than wholly natural, or that Milton suffers a "humbling defeat" at the hands of Hamlet—a character rather than an author.
Here is where all those hasty introductions to the Chelsea House editions revive in the mind, reminding one of Bloom's comfort with arbitrary promulgations and convenient ex cathedra salvos. Consider his introduction to the Zora Neale Hurston volume.
"Extra-literary factors have entered into the process of even secular canonization from Hellenistic Alexandria into the High Modernist Era of Eliot and Pound," he began, "so that it need not much dismay us if contemporary work by women and by minority writers becomes esteemed on grounds other than aesthetic."
A shabby lead, considering that Bloom actually liked Hurston—or at least Their Eyes Were Watching God. But could anyone not trapped behind elite Europhile glasses believe that "Nietzsche's vitalistic injunction, that we must try to live as though it were morning, is the implicit basis of Hurston's true religion"? The conceit made as much sense as Bloom's fey judgment, in his introduction to the Rudyard Kipling volume, that "Kipling writes in the rhetorical stance of an aesthete, and is very much a Paterian in the metaphysical sense." If Bloom's instant Chelsea House intros had been his comp exams, his first book might have been called The Flight to Another Career.
"It may be," Bloom writes at the outset of his new book, referring to Robert Burton's 17th-century The Anatomy of Melancholy, "that all I share with Burton is an obsessiveness somewhat parallel to his own." Agreed. In regard to Emerson, Bloom appears bent on the Concord sage's promise that "if the single man plant himself indomitably on his instincts, and there abide, the huge world will come round to him."
Bloom may have a wait. In his new book, he sounds grimly like the lit-crit equivalent of an unsteady Mideast autocrat, used to declaiming on whatever strikes his fancy, oblivious as his ritual pronouncements fall on deaf ears.
Me:
Edith Grossman, in her introduction to her translation of Don Quixote, said of something Bloom had written about the novel, maybe his own introduction to it, and I paraphrase, "Beware, major gas bag alert!" Says it all by my lights, for all the big fella's prodigious output. Also, this is very enjoyable for putting pomposity in its place: http://tiny.cc/667le
Saturday, April 23, 2011
Stirring Words by Christopher Hitchens to American Atheists
Friday, 22 April 2011
Dear fellow-unbelievers,
Nothing would have kept me from joining you except the loss of my voice (at least my speaking voice) which in turn is due to a long argument I am currently having with the specter of death.
Nobody ever wins this argument, though there are some solid points to be made while the discussion goes on. I have found, as the enemy becomes more familiar, that all the special pleading for salvation, redemption and supernatural deliverance appears even more hollow and artificial to me than it did before.
I hope to help defend and pass on the lessons of this for many years to come, but for now I have found my trust better placed in two things: the skill and principle of advanced medical science, and the comradeship of innumerable friends and family, all of them immune to the false consolations of religion.
It is these forces among others which will speed the day when humanity emancipates itself from the mind-forged manacles of servility and superstition. It is our innate solidarity, and not some despotism of the sky, which is the source of our morality and our sense of decency.
That essential sense of decency is outraged every day. Our theocratic enemy is in plain view. Protean in form, it extends from the overt menace of nuclear-armed mullahs to the insidious campaigns to have stultifying pseudo-science taught in American schools.
But in the past few years, there have been heartening signs of a genuine and spontaneous resistance to this sinister nonsense: a resistance which repudiates the right of bullies and tyrants to make the absurd claim that they have god on their side.
To have had a small part in this resistance has been the greatest honor of my lifetime: the pattern and original of all dictatorship is the surrender of reason to absolutism and the abandonment of critical, objective inquiry. The cheap name for this lethal delusion is religion, and we must learn new ways of combating it in the public sphere, just as we have learned to free ourselves of it in private.
Our weapons are the ironic mind against the literal: the open mind against the credulous; the courageous pursuit of truth against the fearful and abject forces who would set limits to investigation (and who stupidly claim that we already have all the truth we need).
Perhaps above all, we affirm life over the cults of death and human sacrifice and are afraid, not of inevitable death, but rather of a human life that is cramped and distorted by the pathetic need to offer mindless adulation, or the dismal belief that the laws of nature respond to wailings and incantations.
As the heirs of a secular revolution, American atheists have a special responsibility to defend and uphold the Constitution that patrols the boundary between Church and State. This, too, is an honor and a privilege. Believe me when I say that I am present with you, even if not corporeally (and only metaphorically in spirit...) Resolve to build up Mr Jefferson's wall of separation.
And don't keep the faith.
Sincerely
Christopher Hitchens
Me:
Amen!
On Racially Integrating American Schools
One of the most powerful tools for improving the educational achievement of poor black and Hispanic public school students is, regrettably, seldom even considered. It has become a political no-no.
Educators know that it is very difficult to get consistently good results in schools characterized by high concentrations of poverty. The best teachers tend to avoid such schools. Expectations regarding student achievement are frequently much lower, and there are lower levels of parental involvement.
These, of course, are the very schools in which so many black and Hispanic children are enrolled.
Breaking up these toxic concentrations of poverty would seem to be a logical and worthy goal. Long years of evidence show that poor kids of all ethnic backgrounds do better academically when they go to school with their more affluent — that is, middle class — peers. But when the poor kids are black or Hispanic, that means racial and ethnic integration in the schools. Despite all the babble about a postracial America, that has been off the table for a long time.
More than a half-century after the landmark Brown v. Board of Education school desegregation ruling, we are still trying as a country to validate and justify the discredited concept of separate but equal schools — the very idea supposedly overturned by Brown v. Board when it declared, “Separate educational facilities are inherently unequal.”
Schools are no longer legally segregated, but because of residential patterns, housing discrimination, economic disparities and long-held custom, they most emphatically are in reality.
“Ninety-five percent of education reform is about trying to make separate schools for rich and poor work, but there is very little evidence that you can have success when you pack all the low-income students into one particular school,” said Richard Kahlenberg, a senior fellow at the Century Foundation who specializes in education issues.
The current obsession with firing teachers, attacking unions and creating ever more charter schools has done very little to improve the academic outcomes of poor black and Latino students. Nothing has brought about gains on the scale that is needed.
If you really want to improve the education of poor children, you have to get them away from learning environments that are smothered by poverty. This is being done in some places, with impressive results. An important study conducted by the Century Foundation in Montgomery County, Md., showed that low-income students who happened to be enrolled in affluent elementary schools did much better than similarly low-income students in higher-poverty schools in the county.
The study, released last October, found that “over a period of five to seven years, children in public housing who attended the school district’s most advantaged schools (as measured by either subsidized lunch status or the district’s own criteria) far outperformed in math and reading those children in public housing who attended the district’s least-advantaged public schools.”
Studies have shown that it is not the race of the students that is significant, but rather the improved all-around environment of schools with better teachers, fewer classroom disruptions, pupils who are more engaged academically, parents who are more involved, and so on. The poorer students benefit from the more affluent environment. “It’s a much more effective way of closing the achievement gap,” said Mr. Kahlenberg.
About 80 school districts across the country are taking steps to reduce the concentrations of poverty in their schools. But there is no getting away from the fact that if you try to bring about economic integration, you’re also talking about racial and ethnic integration, and that provokes bitter resistance. The election of Barack Obama has not made true integration any more palatable to millions of Americans.
I favor integration for integration’s sake. This society should be far more integrated in almost every way than it is now. But to get around the political obstacles to school integration, districts have tried a number of strategies. Some have established specialized, high-achieving magnet schools in high-poverty neighborhoods, which have had some success in attracting middle class students. Some middle-class schools have been willing to accept transfers of low-income students when those transfers are accompanied by additional resources that benefit all of the students in the schools.
It’s difficult, but there are ways to sidestep the politics. What I think is a shame is that we have to do all of this humiliating dancing around the perennially uncomfortable issue of race.
We pretend that no one’s a racist anymore, but it’s easier to talk about pornography in polite company than racial integration. Everybody’s in favor of helping poor black kids do better in school, but the consensus is that those efforts are best confined to the kids’ own poor black neighborhoods.
Separate but equal. The Supreme Court understood in 1954 that it would never work. But our perpetual bad faith on matters of race keeps us trying.
John McWhorter/TNR/April 22, 2011
“I favor integration for integration’s sake,” Bob Herbert wrote in one of his last columns for The New York Times, on what we supposedly need to make poor black students learn more in school.
What poisonous words those actually were, in their way, despite the adulatory flood of letters the column predictably attracted from readers taught something too rarely dismissed as the soft bigotry that it is: that when human beings are black American and poor, we cannot expect them to learn in the same room.
The idea is that what poor black kids need in school is for the kids next to them to be middle class, by which Herbert effectively means white. Herbert, like so many, had it that “years of evidence” show the truth in this idea that when black kids from The Wire meet white kids from Malcolm in the Middle, that takes care of the black-white testing gap.
Herbert’s demonstration piece was a study by the Century Foundation showing that poor black kids from housing projects in Montgomery County, Maryland performed better after spending their elementary school years in better-funded school districts.
But the study itself puts into question Herbert’s implication that the problem is merely one of a heartless America refusing to consider a solution that has become “a political no-no” (i.e. touching a third rail of NIMBY-infused racism). These students came out of sixth grade having closed a mere one-third of the black-white reading gap.
This is the kind of “years of evidence” that makes the argument for integration such a supposedly open-and-shut case?
Meanwhile, what about the “years of evidence” of what these Montgomery County kids were in for on their way to further “integration”? The classic formulation is Claude Steele’s idea that black students surrounded by white ones suffer from “stereotype threat,” haunted by the idea that black kids are less bright and subconsciously hindered in their scholarly performance (this idea is now encapsulated in Steele’s newish book).
As Matthew McKnight warned us recently in these pages, a new variation on this is Gregory Walton and Jeffrey Cohen’s 2007 study of “belonging uncertainty,” according to which:
...In academic and professional settings, members of socially stigmatized groups are more uncertain of the quality of their social bonds, and thus more sensitive to issues of social belonging. We call this state belonging uncertainty, and suggest that it contributes to racial disparities in achievement...
And what about findings such as in this article by Karolyn Tyson, Domini Castellino and William Darity, which despite media reports implying that it “disproved” the idea that black teens often deride nerdy peers as “acting white,” showed that the “acting white” accusation was not only alive and well but most common in integrated schools?
I certainly observed this in middle school. In fifth grade, the black kids in my private school class, reinforced by a few who were of lower-income families, began socializing mostly together as racial identity started to develop. Alone, that was fine, but it came along with a starkly diminished valuation of schoolwork—this was now, although never put in so many words, what it was to be really “black.” I watched two kids’ grades go down as a price for social inclusion in that group.
Work by scholars like Steele and Walton & Cohen is warmly received with vague notions as to how black students ought be made to feel “more comfortable.” However, just what that means is unclear and few seem to genuinely care. The driving impulse would seem to be for good people to show that they “acknowledge” the discomfort in question, out of a general unfocused dismay that America isn’t post-racial.
But what about entertaining the possibility that these studies suggest we reexamine our dutiful recoil at, well, lots of black kids in the same room with books?
It is a moral stain on this nation’s thinking class that a piece like Herbert’s is considered noble wisdom while one such as the article in The Atlantic last year on what Teach for America has learned from twenty years of teaching data goes by like scenery watched from a train.
Teaching poor (black) kids is, we learn in a book based on the findings, “neither mysterious nor magical. It is neither a function of dynamic personality nor dramatic performance.” Rather, what has worked year after year are accessible, sensible and teachable things like routine, constant checking for understanding (and really checking, not just saying “Get it?”), constant revision of lesson plans with a concrete goal in mind, and perseverance. Notably, a master’s degree in education shows no benefit.
There are plenty of inspiring examples of techniques that have improved learning outcomes, none of which have anything to do with integration. As I have written, we know how to teach poor black kids to read, and have for decades. Siegfried Engelmann’s Project Follow Through, and specifically its Direct Instruction technique—a clever phonics-based method of learning sounds, syllables, and rhyming—had slam-dunk results over all other programs tested with it back in the 1970s, on 75,000 children from kindergarten through third grade.
It has continued to, where occasionally allowed to show its mettle. Yet a perusal of Engelmann’s Teaching Needy Kids in Our Backward System shows a callous dismissal of his findings by the Ford Foundation, National Institute of Education and Department of Health, Education and Welfare, on premises so shoddy they recall the endless submission of Rearden Metal to “further tests” in Atlas Shrugged.
For example, Direct Instruction happened to far outscore various other components of Project Follow Through—but because the aggregate score of Direct Instruction and these other components did not surpass good old-fashioned disastrous Title I-funded teaching methods, the National Institute of Education ignored Direct Instruction, refusing to incorporate it into nationally recommended curricula.
By Engelmann’s account, “the drowning was a complete success.” The drowning, we must understand, included that of the now sadly defunct idea that being poor and black does not mean that you can’t learn among other people like you. The current impression otherwise would have baffled black community leaders before roughly 1965.
We cheer to hear someone say poor black kids need white classmates to learn, and then cheer again when we read about the same kids suffering from stereotype threat when among exactly those white kids—and then stick a finger down our throats when someone suggests that we work on ways for poor black students to learn together. It won’t do, especially when none of us would be caught dead calling a humble all-black college “segregated.”
We have gone from opposing segregation in the proper meaning of the term to calling it “segregation” when a schoolroom has only black kids in it, and pretending that our ethical response must be the same as it would be to a crumbling all-black one-room schoolhouse in a Mississippi hamlet in 1923 whose leaders refuse to allot it any funds or resources and distrust “colored” kids even learning to read at all.
It is laudable that many of us want to show that we understand what institutional racism is. Yet, doing that does not always help the people undergoing the effects of institutional racism. Sometimes, what actually does is a little bit of, yes, segregation.
Me:
Thanks to Dhurtado's incisive comments, McWhorter's piece took on particular interest for me, as did the linked-to op-ed by Herbert.
I think Herbert's focus is racial and ethnic--black and Hispanic--which is inseparable from poverty. I don't think that for Herbert race stands as a proxy for underclass economic disadvantage; it is the face and body of it, as in, for his op-ed purposes, A is B. After all, the overarching frame of his op-ed is integration for the sake of integration. And he starts his op-ed with these words:
…One of the most powerful tools for improving the educational achievement of poor black and Hispanic public school students is, regrettably, seldom even considered. It has become a political no-no….
And he says, along the way:
…More than a half-century after the landmark Brown v. Board of Education school desegregation ruling, we are still trying as a country to validate and justify the discredited concept of separate but equal schools — the very idea supposedly overturned by Brown v. Board when it declared, “Separate educational facilities are inherently unequal…
I’m sure that Herbert has concerns for underclass white kids and their underachievement, but that is simply not the focus of his op-ed—which is black and brown educational underperformance, ranging up to complete failure. (This cuts against Dhurtado’s iteration of Herbert’s argument.)
Herbert makes some good points, and McWhorter is either being obtuse or arguing for argument's sake, in a kind of playful, bad-faith way, when he accuses Herbert as he does (about Herbert saying he wants "integration for integration's sake"): “What poisonous words those actually were, in their way,”
and:
…Herbert, like so many, had it that “years of evidence” show the truth in this idea that when black kids from The Wire meet white kids from Malcolm in the Middle, that takes care of the black-white testing gap...
The latter McWhorter statement entirely and unfairly distorts Herbert’s point, and that distortion, once understood, makes mincemeat of McWhorter’s hyperbole of "poisonous words" and "moral stains."
As Dhurtado, and I think Emily P, make abundantly clear, Herbert’s point is that it will be ameliorative for underclass black and brown kids to go to schools that absorb all the benefits of being situated in more affluent neighborhoods, ranging from better bricks and mortar, better resources, and more motivated kids and parents to a generally educationally enthusiastic neighborhood culture.
One can make some countervailing points, such as “acting white,” but surely they cannot prevail against the sheer common sense of Herbert’s ameliorative point. Only by contorting Herbert’s argument into a straw man—treating such integrated attendance not as amelioration but as a complete solution—does McWhorter manage, wrongly, to wring the straw man’s neck.
The problem for me with Herbert’s op-ed, and I think Dhurtado makes this clear in his last post, is that he doesn’t deal concretely with the means of functionally implementing what he prescribes. For, ultimately, the problem is as much legal as it is political.
If by political Herbert means political support for bussing and the like spawned by Brown, well, that is a practical non-starter and faces strong principled arguments as well as practical ones. It is a gap in Herbert’s op-ed that he does not spell out what measures he advocates to, as I say, implement his prescription.
At the heart of the problem, as I see it, is the difference between de jure and de facto separation. And that distinction lies at the base of the two SCOTUS decisions, 5-4. They voided local school board plans in Louisville and Seattle that had rules and criteria for local school admission based on race in the interest of diversity, when no de jure separation existed. On this issue Herbert says nothing direct and concrete. His op ed suffers for that.
What I find missing from McWhorter’s piece (and from Herbert’s) is any engagement with the problem Amy Wax raises in her book RACE, WRONGS, AND REMEDIES: GROUP JUSTICE IN THE 21ST CENTURY, which McWhorter reviewed mostly positively in these pages. Her thesis is that underclass cultural negation, also at the heart of “acting white,” precedes any and all amelioration and, while unaddressed, renders amelioration so much tinkering at the margins.
I find a massive and enduring contradiction in McWhorter’s previous extolling of this thesis—such as in his review of Wax's book—and his own misplaced advocacy of marginal tinkering in his piece here.
DH:
I'm not sure it is a shortcoming of Herbert's op-ed piece that it does not propose a solution. I think he would be achieving a great deal if he could get the issue of de facto segregation back on the table. He does refer to efforts such as establishing magnet schools in poor neighborhoods, and voluntary transfers of students from poor neighborhoods to schools in middle-class neighborhoods. But he appears to recognize that economic/racial integration is a politically and practically challenging objective. The first step is to recognize that economic disparity among school systems (which disproportionately affects blacks and Hispanics) is a profound problem.
I believe it would go a long way (though certainly not all the way) in addressing this issue if there were not such large disparities in education funding. In other words, I think school systems should not be funded by local property taxes, but should be funded by state-levied (or even federally-levied) taxes that are then distributed equally among school systems. Unfortunately, in the current climate at least, that would likely meet as much resistance as cross-district busing did decades ago.
Friday, April 22, 2011
Pete Wehner: Reflections on a Good Friday
Contentions
Reflections on Good Friday
Today is Good Friday, the day on which Christians commemorate the crucifixion of Jesus. It is, in many respects, quite an odd event to commemorate: the agonizing death of one whom people of the Christian faith believe to be the Son of God. But this turns out to be consistent with a thread within Christianity that has captured my imagination ever since I embraced it.
In many respects the Christian faith is an inversion of much of what the world celebrates. The last shall be first. Strength is made perfect in weakness. Blessed are the meek, the poor in spirit, and those who are persecuted for the sake of righteousness. Love rather than hate your enemies. Do not store up for yourself treasures on earth. A great persecutor of Jesus, Saul of Tarsus, became his greatest defender (Paul). And then there is Jesus, who was born not to high privilege in Rome but in a manger in Bethlehem, who sweat drops of blood at Gethsemane, and who died on a cross on Golgotha after being disowned by his disciples. He came not to rule but to serve. Among his last words were “My God, My God, why hast thou forsaken me?”
And yet this story, which runs against the grain of so much of what the world bestows worth on, has touched the hearts of people for two thousand years—in part, I think, because many of us believe there is truth and hope beyond this world, which is disordered in so many ways; in part because the idea of the perfect sacrificing for the imperfect isn’t offensive but a sublime demonstration of love; and in part because the road that leads through suffering ends in glory.
There is a terrific and never-ending power in the drama of this story, in the incarnation, and in being citizens of a City of God which human beings did not build and cannot destroy and which is everlasting.
So much of what we deal with in life are mere shadows; for many of us this day, and the Sunday that follows, is about reality, about grace and mercy, and about the reconciliation of God and man. That is why it has such a hold on our hearts.
Thursday, April 21, 2011
Deductive and Inductive Reasoning and the Law
OK, start with Kierkegaard: Life is understood backwards but can only be lived forward.
By "understood" is meant to be made to conform to rules of deductive reasoning.
In life, as opposed to court, one decides a course of action based on induction, or synthesis, which is greater than the sum of its parts.
Deduction, on which hindsight is based, can only consider the parts. That is, a judgment reached by deduction is not the same as a judgment reached by induction; or, judgment by analysis is not the same as judgment by synthesis.
Courts admit only analysis even though the actions being judged were based on synthesis.
Thus the court is judging the defendant based on a mode of reasoning unavailable to him at the time and thus fundamentally unjust.
Which begs the question: What's the alternative?
A start, though not the solution, would be to replace the adversarial system with the inquisitorial system, which trial lawyers naturally would find anathema, since they would all rather that their judges be potted plants.
As said, this wouldn't end the problem of hindsight justice, but it would help mitigate the Jarndyce and Jarndyce form of endless parentheses that fuel the fee meters.
Btw, if the "reasonable person" standard truly obtained, then we'd not have the likes of the McDonald's coffee case, which scandalizes all reasonable people...excepting lawyers, whose courts they regard as hermetic repositories of reason.
Of course, that's just my opinion. I could be wrong.
Me:
I’m not sure I have taken in all or any of your nuances, given how compressed your assertions are. But, to start, in my pedestrian, simple-minded way let me try to clarify a few things.
Deductive reasoning is from the general to A particular. “It’s raining. Therefore, the streets will be wet.” Inductive reasoning is from A particular to the general. “The streets are all wet. Therefore it must have rained.”
You are saying we live by going forward based on our more general conclusions drawn from past particular observations. That is to say, as we go forward our expectations will, we think, conform to what we have observed. Induction, you say, plays no part in this. You hold, I think, that inductive synthesis stands to judgment as deductive analysis stands to judgment.
I don’t understand this at all, if I have your reasoning right. Prosaically, if it’s raining, I wear my boots because the streets will be wet. If it’s overcast, I take my umbrella because it’s likely to rain. We “live forward”/go forward based on both inductive and deductive reasoning. They’re different kinds of reasoning, but theirs is a distinction without a difference for going forward, I argue.
It’s inaccurate to say “Courts admit only analysis even though actions being judged were based on synthesis” if I understand this. Courts admit evidence. Courts find facts based on the evidence, which is essentially a process of drawing conclusory inferences.
There may be a factual issue whether at x time the roads were wet. There may be evidence that it rained just before x time. There may be an issue whether it was raining at x time. There may be evidence that just after x time the roads were wet. As well, experts opine on all kinds of factual questions. That’s called expert evidence. Sometimes their reasoning to their opinions is inductive; sometimes it’s deductive. So your assertion does not hold.
Neither does your assertion nor your question, accordingly, if I understand them: “Thus the court is judging the defendant based on a mode of reasoning unavailable to him at the time and thus fundamentally unjust. Which begs the question: What's the alternative?”
There are a lot of contending pros and cons about the adversarial versus the inquisitorial systems of administering justice, and I need to know more about the latter, but they are not helped by mistakenly calling judges “potted plants,” or by referring to “hindsight justice” and “hermetic repositories of reason,” each of which mischaracterizations can easily be shown to be such.
Finally, you say, “Of course, that's just my opinion. I could be wrong.” Bingo!
basman, intriguing proposal. It wouldn't work for medicine, though. I have sat medical professional examinations in two countries, the USA and Australia, and I have yet to encounter a test that I would trust to certify competency in the full range of situations required. I suppose there's no absolute reason that you couldn't permit people to "test out" of the first two, preclinical years of med school, but after that medical education is all on-the-job training, essentially an apprenticeship, and it is hard to imagine an alternative, medically ethical pathway to acquiring the practical skills that such training instills.
Me:
aaronw, I take completely your point about the inapplicability of my modest proposal to medicine. But law school--save for mooting and courses and programs geared to practical experience, like a supervised stint in a law clinic or some such--doesn't do so much, so uniquely, that alternatives to it can't, in my view, accomplish the same. In Canada, in every province, somewhere along the way, a prospective lawyer must article--apprentice--for 6 months to a year. Plenty of full-on experience there. When I mention my proposal to my lawyer friends, they look at me like I'm crazed. But when I think how much law school costs, how those expenses entrench class differences because they are so prohibitively expensive for so many, and how law schools vying for academic respectability claim they are providing an education in a particular discipline as much as, or more than, they are professional training schools, I still await an argument that puts my proposal decisively in its place.
one p.s.: When you say " ... suppose there's no absolute reason that you couldn't permit people to "test out" of the first two, preclinical years of med school... " you're getting into the space my proposal occupies.
Me:
Our Canadian Bar Exams are a true test of a jurisdiction's black-letter law in major areas. When I did them, it was a 6-month program moving from one subject to the next, replete with big, weighty tomes and a stiff exam for each segment studied. After that I had to do a year's articling, and that was all after the traditional 3 years of law school. But their whys and wherefores as such are not really my point--which is alternatives to the traditional (oppressive) law school stranglehold on certification. The possibilities for designing the appropriate lead-up to finally vouching for professional competence are wide open and really interesting in their own right.
That's much better than what we have in the US, Basman. That said, it doesn't sound as though many people could successfully complete such an exam without some type of specialized formal education (i.e., law school). But I have always believed that the first year of US-style law school is sufficient to prepare one to be a legal apprentice. (Associates in large law firms are nothing more than absurdly well-paid apprentices.) On the other hand, I immensely enjoyed my second and third years of law school. In focusing on whether a law school prepares a person to practice law or ensures employment as a lawyer, we are overlooking the intrinsic value of a legal education, and the contributions to legal thought and the administration of justice that are made in the cauldron of law school. Taking a seminar in critical legal theory or writing a law review article may not have done anything to make me a better lawyer, but those experiences were personally enriching and enabled me, I would like to think, to make some contribution to legal scholarship.
Dhurtado
Your law school experience was different than mine. First semester, 300 of us scaredy-cats wondered whether we’d flunk--and all of us were good students. After first-semester exams, once we saw we could pass, apart from some pockets of genuine interest--jurisprudence, administration of criminal justice, a joint essay on equality before the law in lieu of a course, and civil liberties--each semester was for me at least an increasingly frustrating and irritating jumping over hurdles: writing papers, sitting exams, getting past graduation. (I wrote one paper that got published in a legal journal when I went back for an LLM. It was on continuing discovery under Ontario procedures. I never finished my LLM--I lost the motivation to write my thesis after I finished my course work.)
But apart from these pleasant/unpleasant reminiscences, I emphatically maintain my point that law school--however intellectually enriching some may have found or find it--need not be the exclusive means whereby an aspirant can equip him/herself to take whatever battery of tests may be in order, together with articling/apprenticeship--in England called pupillage--to ensure presumptive legal competency.
Disestablishing JDs as a necessary step toward licensing lawyers, as I noted before, among other things, breaks the self-sustaining monopoly law schools hold; erodes the class differences that, by and large, determine who can afford to go; allows people to gain proficiency without incurring near-prohibitive costs; and harnesses the internet, giving people flexibility in fashioning their own legal education and opening up a host of innovative means catering to those who would do that fashioning.