Thursday, September 30, 2010

Review of The Social Network by Lawrence Lessig: And So Much More

Sorkin vs. Zuckerberg

‘The Social Network’ is wonderful entertainment, but its message is actually kind of evil.

Lawrence Lessig October 1, 2010//TNR

In 2004, a Harvard undergraduate got an idea (yes, that is ambiguous) for a new kind of social network. Here’s the important point: He built it. He had a bunch of extremely clever clues for opening up a social space that every kid (anyone younger than I am) would love. He architected that social space around the social life of the kids he knew. And he worked ferociously hard to make sure the system was stable and functioning at all times. The undergraduate then spread it to other schools, then other communities, and now to anyone. Today, with more than 500,000,000 users, it is one of the fastest growing networks in the history of man. That undergraduate is now a billionaire, multiple times over. He is the youngest billionaire in the world.

In 2009, Aaron Sorkin (“Sports Night,” “The West Wing”) got (yes, the same word) the idea to write a script for a movie about this new social network. Here’s the important point: He made it. As with every one of his extraordinary works, Sorkin crafted dialogue for an as-yet-not-evolved species of humans—ordinary people, here students, who talk perpetually with the wit and brilliance of George Bernard Shaw or Bertrand Russell. (I’m a Harvard professor. Trust me: The students don’t speak this language.)

With that script, and with a massive hand from the film’s director, David Fincher, he helped steer an intelligent, beautiful, and compelling film through to completion. You will see this movie, and you should. As a film, visually and rhythmically, and as a story, dramatically, the work earns its place in the history of the field.

But as a story about Facebook, it is deeply, deeply flawed. As I watched the film, and considered what it missed, it struck me that there was more than a hint of self-congratulatory contempt in the motives behind how this story was told. Imagine a jester from King George III’s court, charged in 1790 with writing a comedy about the new American Republic. That comedy would show the new Republic through the eyes of the old. It would dress up the story with familiar figures—an aristocracy, or a wannabe aristocracy, with grand estates, but none remotely as grand as in England. The message would be, “Fear not, there’s no reason to go. The new world is silly at best, deeply degenerate, at worst.”

Not every account of a new world suffers like this. Alexis de Tocqueville showed the old world there was more here than there. But Sorkin is no Tocqueville. Indeed, he simply hasn’t a clue to the real secret sauce in the story he is trying to tell. And the ramifications of this misunderstanding go well beyond the multiplex.

Two lawsuits provide the frame for The Social Network. One was brought by Cameron and Tyler Winklevoss, twins at Harvard who thought they had hired Zuckerberg to build for them what Facebook would become. The other was brought by Eduardo Saverin, Zuckerberg’s “one friend” and partner, and Facebook’s initial CFO, who was eventually pushed out of the company by Silicon Valley venture capitalists.

These cases function as a kind of Greek chorus, setting the standards of right, throughout the film. It is against the high ideals they represent that everything else gets judged. And indeed, the lawyers are the only truly respectable or honorable characters in the film. When they’re ridiculed or insulted by Zuckerberg, their responses are more mature, and better, than Zuckerberg’s. (If you remember the scene in “The Wire” where Omar uses his wit to cut the lawyer to bits, that’s not this film.) The lawyers here rise above the pokes, regardless of the brilliance in Zuckerberg’s charge. This is kindergarten. They are the teachers. We’re all meant to share a knowing wink, or smirk, as we watch the silliness of children at play.

In Sorkin’s world—which is to say Hollywood, where lawyers attempt to control every last scrap of culture—this framing makes sense. But as I watched this film, as a law professor, and someone who has tried as best I can to understand the new world now living in Silicon Valley, the only people that I felt embarrassed for were the lawyers.

The total and absolute absurdity of the world where the engines of a federal lawsuit get cranked up to adjudicate the hurt feelings (because “our idea was stolen!”) of entitled Harvard undergraduates is completely missed by Sorkin. We can’t know enough from the film to know whether there was actually any substantial legal claim here. Sorkin has been upfront about the fact that there are fabrications aplenty lacing the story.

But from the story as told, we certainly know enough to know that any legal system that would allow these kids to extort $65 million from the most successful business this century should be ashamed of itself. Did Zuckerberg breach his contract? Maybe, for which the damages are more like $650.00, not $65 million. Did he steal a trade secret? Absolutely not. Did he steal any other “property”? Absolutely not—the code for Facebook was his, and the “idea” of a social network is not a patent. It wasn’t justice that gave the twins $65 million; it was the fear of a random and inefficient system of law. That system is a tax on innovation and creativity. That tax is the real villain here, not the innovator it burdened.

The case for Zuckerberg’s former partner is stronger, and more sensible and sad. But here again, the villains are not even named. Sorkin makes the autodidact Sean Parker, co-founder of Napster, the evil one. (No copyright-industry bad blood there). I know Parker. This is not him.

The nastiest people in this story (at least if Sorkin tells this part accurately) were the Facebook lawyers who show up in poorly fitting suits and let Saverin believe that they were in this, as in everything else they had done, representing Saverin as well. If that’s what actually happened, it was plainly unethical. No doubt, Saverin was stupid to trust them, but the absurdity here is a world where it is stupid to trust members of an elite and regulated profession. Again, an absurdity one could well miss in this film between all the cocaine and practically naked twenty-somethings.

But the most frustrating bit of The Social Network is not its obliviousness to the silliness of modern American law. It is its failure to even mention the real magic behind the Facebook story. In interviews given after making the film, Sorkin boasts about his ignorance of the Internet.

That ignorance shows. This is like a film about the atomic bomb which never even introduces the idea that an explosion produced through atomic fission is importantly different from an explosion produced by dynamite. Instead, we’re just shown a big explosion ($25 billion in market capitalization—that’s a lot of dynamite!) and expected to grok (the word us geek-wannabes use to show you we know of what we speak) the world of difference this innovation in bombs entails.

What is important in Zuckerberg’s story is not that he’s a boy genius. He plainly is, but many are. It’s not that he’s a socially clumsy (relative to the Harvard elite) boy genius. Every one of them is. And it’s not that he invented an amazing product through hard work and insight that millions love. The history of American entrepreneurism is just that history, told with different technologies at different times and places.

Instead, what’s important here is that Zuckerberg’s genius could be embraced by half-a-billion people within six years of its first being launched, without (and here is the critical bit) asking permission of anyone. The real story is not the invention. It is the platform that makes the invention sing. Zuckerberg didn’t invent that platform. He was a hacker (a term of praise) who built for it. And as much as Zuckerberg deserves endless respect from every decent soul for his success, the real hero in this story doesn’t even get a credit. It’s something Sorkin doesn’t even notice.

For comparison’s sake, consider another pair of Massachusetts entrepreneurs, Tom First and Tom Scott. After graduating from Brown in 1989, they started a delivery service to boats on Nantucket Sound. During their first winter, they invented a juice drink. People liked their juice. Slowly, it dawned on First and Scott that maybe there was a business here. Nantucket Nectars was born. The two Toms started the long slog of getting distribution. Ocean Spray bought the company. It later sold the business to Cadbury Schweppes.

At each step after the first, along the way to giving their customers what they wanted, the two Toms had to ask permission from someone. They needed permission from a manufacturer to get into his plant. Permission from a distributor to get into her network. And permission from stores to get before the customer. Each step between the idea and the customer was a slog. They made the slog, and succeeded. But many try to make that slog and fail. Sometimes for good reasons. Sometimes not.

Zuckerberg faced no such barrier. For less than $1,000, he could get his idea onto the Internet. He needed no permission from the network provider. He needed no clearance from Harvard to offer it to Harvard students. Neither with Yale, or Princeton, or Stanford. Nor with every other community he invited in. Because the platform of the Internet is open and free, or in the language of the day, because it is a “neutral network,” a billion Mark Zuckerbergs have the opportunity to invent for the platform.

And though there are crucial partners who are essential to bring the product to market, the cost of proving viability on this platform has dropped dramatically. You don’t even have to possess Zuckerberg’s technical genius to develop your own idea for the Internet today. Websites across the developing world deliver high quality coding to complement the very best ideas from anywhere. This is a platform that has made democratic innovation possible—and it was on the Facebook platform resting on that Internet platform that another Facebook co-founder, Chris Hughes, organized the most important digital movement for Obama, and that the film’s petty villain, Sean Parker, organized Causes, one of the most important tools to support nonprofit social missions.

The tragedy—small in the scale of things, no doubt—of this film is that practically everyone watching it will miss this point. Practically everyone walking out will think they understand genius on the Internet. But almost none will have seen the real genius here. And that is tragedy because just at the moment when we celebrate the product of these two wonders—Zuckerberg and the Internet—working together, policymakers are conspiring ferociously with old world powers to remove the conditions for this success.

As “network neutrality” gets bargained away—to add insult to injury, by an administration that was elected with the promise to defend it—the opportunities for the Zuckerbergs of tomorrow will shrink. And as they do, we will return more to the world where success depends upon permission. And privilege. And insiders. And where fewer turn their souls to inventing the next great idea.

I had always hoped (naively, no doubt) that this point would be obvious to the creators of film. No field of innovation is more burdened by the judgments of idiots in the middle than film. Scores of directors have watched in horror as their creativity gets maimed by suits-carrying-focus-groups.

I had thought that if only these creators would let themselves understand the ethic of Internet creativity—where the creator gets to speak directly to an audience, where an audience is brought on stage, and talks back—they would get it. And if they did, that there might actually be a chance for this understanding to be shown in one of the only ways this culture understands anymore—through film. Indeed, as I walked into this film unprimed by early reviews, I had hoped, “West Wing” fan-boy that I am, that of all the storytellers in Hollywood, Sorkin was most likely to get it.

He didn’t. His film doesn’t show it. What it shows is worth watching. But what it doesn’t show is an understanding of the most important social and economic platform for innovation in our history.

Zuckerberg is a rightful hero of our time. I want my kids to admire him. To his credit, Sorkin gives him the only lines of true insight in the film: In response to the twins’ lawsuit, he asks, does “a guy who makes a really good chair owe money to anyone who ever made a chair?” And to his partner who signed away his ownership in Facebook: “You’re gonna blame me because you were the business head of the company and you made a bad business deal with your own company?”

Friends who know Zuckerberg say such insight is common. No doubt his handlers are panicked that the film will tarnish the brand. He should listen less to these handlers. As I looked around at the packed theater of teens and twenty-somethings, there was no doubt who was in the right, however geeky and clumsy and sad. That generation will judge this new world. If, that is, we allow that new world to continue to flourish.

A Seeming Flaw in Justice Himel's Striking Down of Some Prostitution Laws

I have not yet read her Honour's tome of a decision but I have a preliminary thought of what might be an abiding fundamental error, having read some accounts of her ruling. My thought is this: Her Honour is constitutionally protecting criminality. Time and analysis will tell.

Wednesday, September 29, 2010

Good Decision? Striking Down Prostitution Laws: Natasha Falle Doesn't Think So

Former prostitute 'shocked' by Ont. court decision

A former sex-trade worker who now helps prostitutes trying to leave the trade says an Ontario court's decision to strike down Canada's laws surrounding prostitution was a terrible move.

Natasha Falle, who runs StreetLight, a non-profit organization that provides support services for sex workers, and works with the Toronto Police's Sex Crimes Unit, says she was "shocked" by Tuesday's court decision to strike down three provisions of the Criminal Code surrounding prostitution.


"It was very disappointing for me that a judge would determine that this is the best solution for protecting people in the sex trade industry," Falle told CTV's Canada AM Wednesday morning.

The laws prohibited communicating for the purposes of prostitution, keeping a common bawdy house, and living on the avails of the trade.

Justice Susan Himel wrote in her 131-page decision that the laws, "individually and together, force prostitutes to choose between their liberty interest and their right to security of the person as protected under the Canadian Charter of Rights and Freedoms."


Those who wanted the laws to be quashed say they forced hookers to work the streets, instead of in the safety of their homes. But Falle says decriminalizing all aspects of prostitution is not the solution.

"I don't think Canadians understand what this means. This means, if this decision is to carry through… your next door neighbours can run a brothel right beside you. Your children could be exposed to condoms left on their driveway, johns propositioning them," she said.

Falle also worries that by normalizing prostitution, it gives children the idea that prostitution is a good and acceptable way to make a living.

"Thirteen to 16 years old is the average age that someone enters prostitution. So when do we start referring to them as sex workers?" she said.

Ron Marzel, one of the lawyers for the women who challenged the laws, says he was "thrilled" with Tuesday's decision, which he says will protect sex workers, who should have the right to practice their profession safely.

"Certainly, we need social programs to make sure that children in the profession; however,the reality is there are consenting adults who want to go into this profession," he said on Canada AM.

Falle grew incensed at this, insisting that 97 per cent of prostitutes aren't in the sex trade by choice. She says the voices of the overwhelming majority of women who want to get out of prostitution are being drowned out by a vocal few.

"The voices we are hearing right now are the minority voices and they are only strong because of circumstances in Ontario. All the other provinces are not on board with this."

Tuesday's decision is subject to a 30-day stay during which the laws remain in place; the federal government can seek an extension of that period.

Ontario Premier Dalton McGuinty said Wednesday that he would be surprised if the federal government did not appeal the ruling.

McGuinty says the ruling proposes some profound changes to laws that have been on the books for decades, and Ontario "looks forward" to supporting the federal Conservatives in the expected appeal.

Justice Minister Rob Nicholson signalled last night that the Conservatives are seriously considering an appeal. He said Ottawa would "fight to ensure that the criminal law continues to address the significant harms that flow from prostitution to both communities and the prostitutes themselves."

A (slightly edited by me) email from the Bench:

Thanks for the blog Itzik. I happen to agree with you; we apply s. 25 "to at least minimum standards of law." There is nothing in my view that gives us carte blanche to do anything we decide provided it meets the test of "good conscience." Do you remember the old adage about "equity being equal to the length of the Chancellor's foot..." (or something like that) from law school?

Tuesday, September 28, 2010

A Legal Case for a Change: Small Claims Court: Pleadings: Set Off: s.25 "Courts of Justice Act" and Ambit of Small Claims Court Judge's Discretion

Hydro One Networks Inc. v. Yakeley, 2010 ONSC 4770 (CanLII)

http://www.canlii.org/en/on/onsc/doc/2010/2010onsc4770/2010onsc4770.html


The question is whether the trial judge erred in allowing the defence of set off when it had not been explicitly pleaded.

This was a Small Claims Court action in which the defendant/respondent had been self-represented. He had counsel for the appeal.

In part of his Defence he pleaded:

...4. A number of situations were created as a result of this last minute change in pricing and conditions. First we are already committed to the cost of preparing the house for the move, we were already committed to the cost of moving the house with the house movers since the house was already loaded for the move and sitting at the end of the driveway and we had a $25,000 deposit with the purchasers of our property to have the house removed from the property by August 31, 2001 which we were not prepared to extend without forfeit of our deposit. Since there was no time to negotiate with property owners under the high-voltage wires or to build a road the only option to us was to proceed with the move was to cut the top of the entire roof off to lower the overall height of the house for the move.

5. … We were left with extensive cost to rebuild the roof of the house to its pre-existing condition not to mention costs to repair damage to the home from the elements. [emphasis added]...

And in dialogue with the trial judge, the following was said:

...[8] In the Small Claims Court trial the deputy Small Claims Court judge had a dialogue with Mr. Yakeley during his examination-in-chief. As set out at page 55 of the transcript:

THE COURT: It is perhaps a legalese question but I know that it was raised at the Settlement Conference held March 17, 2008. In essence what you’re seeking, in part of your defence, is a set off but there is no claim by you as against Hydro One which would have been referred to as the defendant’s claim? I know that it was discussed because there was a note in that memorandum about it and I just want it on the record. I mean, I do not care about the discussions per se but there is no such claim advanced by you?

THE WITNESS [MR. YAKELEY]: That’s right.

THE COURT: Just so that I am clear, to repeat the obvious, your position is, but for some caution from the plaintiff, and particularly Shane Deugo, [Hydro One witness] to you in your initial meeting that the height might have been changed because of, depending on temperature and depending on time of year, you would have done differently?

THE WITNESS: That’s right.

THE COURT: And because of the time and shortness of notice and all the commitments you had made, it ended up costing you $80,000 more?

THE WITNESS: That’s right.

THE COURT: Do you want to say anything more?

THE WITNESS: I considered making a counter-claim to Hydro to offset this argument but I decided not – that I wasn’t going to put – the resources and the effort into the –...

Set off is available as a defence under s. 111(1) of the Courts of Justice Act. Its s. 25 says, "The Small Claims Court shall hear and determine in a summary way all questions of law and fact and may make such order as is considered just and agreeable to good conscience."

Case law lays it down that Small Claims Court proceedings are informal with latitude for its participants, especially since many are self-represented. The pleadings strictures of the Superior Court are therefore unworkable in the Small Claims Court. Functionally, facts are presented by the parties and the judge determines the legal issues arising from those facts.

Here, for the reviewing judge, the issue of set off was sufficiently raised. The plaintiff, a big corporation with in-house counsel, had to know the defence being put to it simply based on the pleadings and the exchange between the defendant and the trial judge. The plaintiff, represented at trial, did not cross-examine or ask for an adjournment. As the reviewing judge concluded:

...[19] There is no magic in the requirement to use the words “set off” in pleadings in Small Claims Court. To require a strict adherence to the rules of pleadings would be contrary to the role of the Small Claims Court in the administration of justice. In my view there was no error in law in applying set off principles based on the evidence which the deputy judge considered and accepted...

Me:

This decision seems patently right. My concern is with the following reasoning, cited as authority in the course of the reviewing judge's own reasons:

...[16] With respect to the formalities required in pleadings in Small Claims Court, Heeney, J. stated in 936464 Ontario Ltd. v. Mungo Bear Ltd. (2003), 74 O.R. (3d) 45 at para. 45:...

More important though is the fact that the case at bar was litigated in the Small Claims Court. The higher standards of pleading in the Superior Court are simply unworkable in a Small Claims Court, where litigants are routinely unrepresented, and where legal concepts such as the many varieties of cause of action are completely foreign to the parties. Essentially, the litigants present a set of facts to the deputy judge, and it is left to the deputy judge to determine the legal issues that emerge from those facts and bring his or her legal experience to bear in resolving those issues...

I think this dictum accords more discretion to the Small Claims Court judge than is warranted and is wrong in principle even given the broad language of s. 25 of the Courts of Justice Act. I have never briefed s. 25 but have an initial impression that where, for instance, the law requires specific pleading, such as with limitation periods, the failure to do so might well be fatal, and that "good conscience" in s. 25 means "good conscience according to, at least, minimum standards of law".


I imagine that in the end, at least for the sufficiency of pleadings, functionality will be the final analysis: did the other party fairly have notice of the claim or defence being presented? It is too broad to say, as seems suggested by Justice Heeney, that we give the Small Claims Court judge the facts, and then let him or her do justice, "determine the legal issues", according to his or her legal experience.

Fortunately, this case need not be read as embracing that overbroad statement, which can be read as obiter dicta. Functionally, by what was pleaded, let alone the defendant's colloquy with the trial judge, there's no way that Hydro One didn't know it was facing a defence based on set off, and equitable set off at that. And, I suggest, in that, on this point, lies the ratio of this case.

Abbas Smiling on His Way Out from the Talks?

Abbas Looks For A Way to End Peace Talks--With A Smile On His Face

Posted: 27 Sep 2010//Rubin Reports

By Barry Rubin

It never ceases to amaze me how hysteria and mystification so cloud people's minds over the Arab-Israeli (or Israeli-Palestinian) conflict. Consider this simple point of logic which you may not see explained anywhere else. And see the point at the end about President Barack Obama.

Palestinian Authority (PA) leader Mahmoud Abbas claims that he can't negotiate with Israel if Israel once again begins to construct buildings on existing settlements after a nine-month freeze on construction.

Let's evaluate this statement.

First, Abbas knew that the freeze would last nine months and might not be renewed when it ended in September. If he wanted to give Israel an incentive to continue it--by showing that this Israeli concession brought progress toward peace and some advantage for Israel--Abbas could have acted. Instead, he stalled until the very last moment. For weeks, the United States begged and pressed him to return to talks.

Second, if the Palestinians negotiate a two-state solution they will get--worst-case analysis--almost all of the West Bank. There will be no Jewish settlements in that territory. The settlements will be gone. All the roads and buildings Israel built (unless dismantled in the days before the agreement's implementation) will go to the Palestinians.

So if Abbas and the Palestinians are horrified by Israeli construction, wouldn't it have made sense for them to negotiate real fast? But, on the contrary, they stretch out the process year after year after year, continually finding excuses for doing so.

Remember that the PA refused to negotiate for well over a year after January 2009. All that time Israel was building on settlements. Then for the last nine months when Israel wasn't building in the West Bank, the PA still refused to negotiate.

Let's now provide a full timeline:

Phase One: From 1992 until late in 2000, the PLO, and later the PA, negotiated with Israel at a time when there were no limits on construction within settlements. They were, however, in no hurry to make a deal and, in fact, killed the talks in 2000. Incidentally, Israel made a huge concession from its previous positions to begin the process in 1993: No new settlements or territorial expansion of existing ones. It kept that commitment. The PLO and PA also made some "concessions": They would fight against terrorism. They didn't. They never raised as a bargaining point the idea of a freeze on construction in existing settlements.

Phase Two: Then from 2000 to 2009--a decade--the PA refused any sustained direct peace negotiations at a time when there were no limits on construction within settlements. They never raised as a bargaining point the idea that they would end the violence (2000-2005) or that they would negotiate in exchange for a freeze on construction in existing settlements. That was President Obama's idea in mid-2009 and they rejected it.

Phase Three: After Israel did freeze construction, the PA wasted nine months--knowing the clock was ticking on the temporary freeze--without making any moves to accelerate, or even hold, negotiations.

Thus, the PA has wasted almost 20 years, during which thousands of buildings have been added to Israeli settlements.

Here is a fundamental flaw in the assumption that the Palestinians are desperately eager to get a state and end their suffering. They don't seem so eager at all. Why? Because the Palestinian leadership has long argued that it is more important to conquer all of Israel--or reach an agreement that didn't get in the way of pursuing that goal--than to make compromises and get a two-state solution.

What does the PA want? An independent Palestinian state given as a gift by the world rather than requiring mutual compromise with Israel. That doesn't require negotiations, it requires a lack of negotiations.

If Abbas walks away from talks he will not be crying that creation of a Palestinian state has been delayed. On the contrary, he will be smiling that he escaped from what most PA leaders--though not Prime Minister Salam Fayyad--view as the peace trap.

Incidentally, note that when President Barack Obama made his upbeat interpretation of the "peace process" one of the main themes in his September 23 UN speech, he was totally aware that the negotiations were probably on the verge of collapse due to the termination of the freeze. It could be argued that by playing up the issue he was trying to encourage everyone to keep going, but how can you stake your diplomatic reputation on a card that is about to bring down a house of cards? That's somewhere between being irresponsible and suicidal.

But perhaps Obama has reason to think he can get away with such things. After all, people have forgotten what happened with his speech to the UN last year! He predicted high-level, intensive Israel-Palestinian talks within three months and it took him a year to get low-level, fragile, limited talks. His policy was a total failure yet try to find anyone in the mass media reporting that point.

Monday, September 27, 2010

Breyer vs Scalia?

Pragmatism Strikes Back

Jeffrey Rosen//TNR

Making Our Democracy Work: A Judge's View
by Stephen Breyer


Justice Stephen Breyer’s new book arrives at a time when liberals are still hungry for a constitutional vision. In a series of polls conducted by Quinnipiac University between 2003 and 2008, 54 percent of respondents believed that “the Supreme Court should consider changing times and current realities in applying the principles of the Constitution,” as opposed to 44 percent who believed that the Court “should only consider the original intentions of the authors of the Constitution.”

And yet, unlike the defenders of “original understanding”, the supporters of “the living Constitution” have not been able to agree on a clear interpretive approach. Justice Breyer is the only sitting Supreme Court justice who has tried to outline, in a systematic way, an alternative to originalism.

In its scope and historical detail, Making Our Democracy Work advances Breyer’s vision beyond the broad outlines that he sketched in his previous book, Active Liberty, which argued that judges should interpret the Constitution in ways that promote democratic political participation rather than short-circuiting it.

He sets out to answer a basic question about democratic legitimacy: how can judges earn the public’s confidence, so that Americans will follow even those opinions with which they disagree?

And he offers two principles, drawn from history and experience. The first is that the Court should reject rigid originalism, instead viewing “the Constitution as containing unwavering values that must be applied flexibly to ever-changing circumstances”; and the second is that the Court, when interpreting the Constitution, should “take account of the role of other governmental institutions and the relationships among them.”

Breyer is a staunch pragmatist, and he emphasizes that in building practical working relationships with the President and Congress, the Court should recognize that its decisions have real-world consequences, and take those consequences into account.

Breyer’s book begins by exploring four historical controversies in which the public accepted Supreme Court decisions that substantial numbers of people thought were wrong:

Marbury v. Madison, in 1803, in which Chief Justice Marshall established the principle of judicial review and avoided a conflict with President Jefferson;

Worcester v. Georgia, in 1832, in which Marshall protected the rights of the Cherokee Indians over the objections of President Jackson;

Dred Scott v. Sandford, in 1857, in which Chief Justice Taney struck down the Missouri Compromise and mobilized opposition led by Abraham Lincoln;

and Cooper v. Aaron, in 1958, in which the Court led by Chief Justice Earl Warren unanimously ordered Governor Orval Faubus to integrate the schools in Little Rock.

Breyer’s historical narratives are vivid and full of surprising details. He describes highly technical opinions—such as Justice Benjamin Curtis’s dissent in the Dred Scott case—in lively and accessible ways, and documents how extensively Lincoln relied on Curtis’s dissent. He also unearths relevant details about the political and legal context of the four cases, such as the misleading report that the military filed to justify the Japanese internment, and the Solicitor General’s refusal to cite the unreliable report explicitly.

Although the four cases are different, Breyer says, they illustrate that the public has “developed a habit of following the Court’s constitutional interpretations, even those with which it strongly disagrees,” and that this habit has supported the Court’s increasingly confident assertion of “judicial supremacy”—namely, the principle that the justices are supreme over the president and Congress in the interpretation of the Constitution.

But do these historical examples really support a view of heroic judges fearlessly and successfully protecting unpopular minorities in the face of “strong” opposition from the political branches and the public?

As Breyer notes, the principle of judicial supremacy that the Court asserted in Cooper—namely, that the president and Congress are obligated to follow the Court’s interpretation of the Constitution not only in a particular case but in all similar cases—went far beyond Marshall’s more modest claim in Marbury that all three branches of government are “bound by” the Constitution and entitled to interpret it on their own.

Moreover, Breyer’s examples suggest that the Court is generally ineffective in sustaining constitutional interpretations that are intensely contested by the President and Congress: in the two cases where Presidents or national majorities strongly rejected the Court’s interpretation—the Cherokee Indians case and Dred Scott—the Court was ultimately unable to enforce its constitutional vision. It was only where the President and Congress supported, or at least tolerated, the Court’s decisions that the justices were able to prevail over opposition from a minority of the country.

More generally, Breyer does not discuss the extensive literature by political scientists and others suggesting that the real reason that the Court has generally maintained its legitimacy over time is that it rarely challenges public opinion in a sustained way, even in cases that the public does not follow closely. Breyer’s focus on the need for the Court to persuade the public to accept unpopular opinions may underestimate the degree to which the Court is highly constrained by public opinion; seldom issues opinions that are strongly unpopular with national majorities; and gets into trouble on the few occasions when it sticks its neck out too far.

In the latter half of his book, however, Breyer persuasively acknowledges the Court’s inability to act unilaterally by arguing that it should consider—and often defer to—the institutional views of the President and Congress in deciding cases. Here Breyer shows an appealing humility, which contrasts with the grandiosity of originalist judges who believe they have a unique ability to discern the one and true meaning of the Constitution, and to put the other branches in their place.

This hermeneutical modesty, more than any other quality, is what the most successful justices, notably John Marshall, have deployed to shore up the Court’s fragile legitimacy—never picking fights with the other branches that they cannot win. As Marshall remarked, “I am not fond of butting against a wall in sport.”

Breyer’s institutional modesty suggests that he, far more than the conservative originalists, has inherited the mantle of judicial restraint from Brandeis and Holmes. But like all approaches, institutional modesty has its limitations. One criticism of Breyer’s moderation in interpretive ambition is that he can be, at times, too accommodating of the other branches, too willing to enmesh the Court in constitutional compromises that it would do better to avoid.

In his discussion of Korematsu v. U.S., from 1944, the case that upheld the exclusion of Japanese Americans from their West Coast homes during World War Two, Breyer makes clear that he agrees with Justice Frank Murphy’s dissenting opinion. But in a surprising passage, Breyer goes on to suggest that the Court could have converged on a pragmatic compromise that would have upheld the internment of some Japanese Americans by crafting appropriate safeguards and procedures.

The Court might “have found a workable way to hold the president constitutionally accountable,” he writes. “Perhaps it could have developed a sliding scale in respect to the length of detention and the intensity of its examination of the circumstances. Perhaps it could have insisted that the government increase screening efforts the longer an individual is held in detention. Perhaps it could have required the government to have had in place from the beginning a plan for future screening … As it was the Court majority understood the danger of excessive judicial interference in military affairs, but it did not satisfactorily address the problem of insufficient judicial involvement.”

But if the Court’s decision “hurt the interned Japanese by validating their internment,” as Breyer suggests, wouldn’t it have hurt itself by validating the internment of a smaller group of the detainees with judicially crafted safeguards? What sort of pragmatism is this? As it turned out, on the same day that it struck down Korematsu’s exclusion, which had been endorsed by the President and Congress, the Court refused to endorse the continued detention of another American citizen, Mitsuye Endo, which Roosevelt and Congress had never endorsed. Wouldn’t the Court have dug itself in deeper by saying that Endo, or other Japanese Americans suspected of disloyalty, could, in fact, be detained as long as the government followed appropriate procedures?

Similar questions may be raised about Breyer’s positions in four related Guantanamo cases involving the President’s power to detain suspected enemy combatants in the war on terror. In the Hamdi case, in 2004, Breyer joined Justice Sandra Day O’Connor’s plurality opinion holding that the Constitution permitted the Bush administration to classify an American citizen as an enemy combatant, despite the lack of Congressional authorization. But the opinion added that judges should create procedural safeguards, such as access to lawyers. The obvious objection to the Hamdi opinion was that it failed to respect Congressional prerogatives: why should judges allow President Bush to act unilaterally, inventing judicial oversight mechanisms to save him from his worst instincts, rather than forcing him to ask for Congressional support?

In the Hamdan case in 2006, the Court became less accommodating, insisting that Bush could not create military commissions without Congressional approval. Bush responded by asking Congress to approve his commissions, and Congress promptly obliged, passing the Military Commissions Act of 2006. But then, in the Boumediene case, Breyer joined Justice Anthony Kennedy’s majority opinion holding that Congress could not suspend the writ of habeas corpus in the absence of an emergency, and therefore the procedures Congress created were invalid. Once again, respect for the prerogatives of Congress might have led the Court to construe the Military Commissions Act narrowly: by holding that Congress hadn’t suspended habeas corpus, the Court could have avoided constitutional difficulties. As Breyer acknowledges, “one cannot characterize Boumediene as a case that followed Congressional directions or implemented Congress’s broader purposes.”

So Breyer’s pragmatic attempts to promote inter-branch cooperation in particular cases are sometimes open to question, and the same may be said of his various applications of what he calls the “proportionality principle,” which encourages judges to balance competing constitutional values in cases where they conflict. In interpreting the Second Amendment right to bear arms, for example, Breyer says that judges should balance a handgun ban’s “efficacy, in terms of community safety, with the obstacles it imposes to self-defense.”

But you do not have to agree with all of Breyer’s specific conclusions to admire the transparency and the nuance of his judicial approach. This is constitutional interpretation for adults, an interpretive guide that recognizes that the Court has always depended on the other branches to enforce its constitutional vision, and that if the Court refuses to moderate and balance competing values in a candid way, the more polarized political branches are unlikely to step into the vacuum. Breyer is impressive in presenting an integrated theory not only of constitutional interpretation, but of legal interpretation in general, suggesting that judges can interpret Congressional statutes, administrative decisions, prior precedents, and questions involving federal and state relations in a way that promotes democratic deliberation.

Pragmatism, as Breyer defends it, cannot be summarized on a bumper sticker or a poster, and therefore it will never have the political appeal of originalism, which misleadingly—but in politics, attractively—promises simple answers to hard questions. But it is the complex and careful Breyer who accurately describes how the Court has in fact maintained its legitimacy over time: by viewing itself as a partner of the president and Congress, not as some sort of Platonic guardian.

Most important, Breyer’s willingness to present his argument in terms that educated citizens can understand, in the hope of persuading all of us to participate actively in American democracy, exemplifies an idealism about what is possible in a democratic citizenry, and an optimism about it, that is as impressive as it is rare on the Supreme Court.

Breyer’s book might be described as a work of democratic pedagogy, and a very admirable one. Unlike other justices, who have written memoirs emphasizing their upbringings or the personal difficulties they have overcome, Breyer is a teacher in the best sense, offering a tentative framework for evaluating the Constitution and encouraging citizens to debate with him about the details. More than his particular conclusions, it is the power of his intellectual example—his openness to opposing points of view and his recognition of the difficulty and necessity of trying to balance them—that makes his message urgently relevant in, and a kind of balm for, our polarized age.

The Wisdom of Stanley Crouch on Velma Hart and Obama

Stanley Crouch

September 27th 2010//Daily News

The atmosphere of pathos rose in a mist as Velma Hart explained to Barack Obama why she felt exhausted defending his administration. Hart admitted that she had mistaken the President for a superhero capable of knocking a hole in the brick wall stacked up by Republicans.

In later media appearances, Hart said that her faith in the President had diminished but was not gone. Hart considers him an inspirational leader whose health care bill, among other legislation, is proof of his being more about action than hot air.

Of course, Republican commercials will accuse Obama of being so far removed from real American life that he is even losing - though this will not be explicit - his most faithful base: black people. Hart could not have done a better job for the elephants if GOP head Michael Steele had written her statement.

But while some say that Hart has put the handwriting on the political wall of reality, I do not agree.

Obama has two problems: The first is that he has to battle an opposition that has sold out to the extremes of its base in order to gain power, integrity be damned. No matter how mentally unbalanced the charges, the GOP will buy in if those charges draw enough followers. Responsible Republicans like Peggy Noonan, George Will and David Brooks have not been able to hold back the devil dogs foaming at the mouth.

No matter how bigoted complaints about the President have been, Newt Gingrich, Sarah Palin, Glenn Beck and Rush Limbaugh will lift their pitchforks and torches in agreement. They are confident that their so-called facts will not be seriously questioned - but if they are, Fox News will give them plenty of screen time to make a defense.

The second problem is naiveté and cowardice on the other side of the aisle. The far left feels betrayed because Obama has not turned America into Eden in less than two years. They do not believe that Obama has put up a good fight against those opponents who find the truth less important than gaining legislative power that will make it easier to serve their masters: the wealthy, the big corporations and the lobbyists who serve them most faithfully.

Then there are the people whom Velma Hart represents - the black middle class that works hard, hasn't served time in prison and does its best to rear its children well. None of what they've already achieved stops these striving Americans from believing in a black superhero sent to right all of the wrongs against black people and remove remaining obstacles - of which there are still too many - from their path.

Hart admits that this is a simplistic belief, but it is common to Americans at large. It has been bred into us through Greek mythology, the Old Testament and Hollywood. Those all have good stories that should inspire when there is inspiration to be had.

But magical heroes like Hercules who create the illusion of morale that is necessary to stay the path in a tough fight are no replacement for sensible leaders who step down into the mess and get the job done. This is what Hart failed to understand.

It is time for all of us to grow up and face the fact that we just might be "the people we have been waiting for," as the President has said. If we thought we were electing a superhero, we were wrong. Obama never promised that.

But whether the change Obama did promise is possible can only be learned if we decide to stay the course and refuse to give in to fatigue and paralytic cynicism. There has never been a better time to fight than now. The opposition does not expect integrity or anything more than cowardice and bitchiness.

While acknowledging the frustration of people like Hart, we must keep at it and prove those loons on the right absolutely wrong. Republicans might then have the resolve to clean their house of spiritual vermin.

Me:

I don't understand--I do actually, but I don't--all the attention paid to Hart and why anyone thought she said anything remotely telling. She has two kids in private schools and is feeling insecure. Well, she might for starters consider putting her kids in public schools. But that's only a minor part of the point. The larger point is what Crouch says. What would she have Obama do that he hasn't done? As Michael Moore says very affectingly, Obama has a good heart. He has tried hard to make life better for Americans, worked hard and drawn on the best expertise available which looks at matters from his general perspective. What Ms Hart did really was whine that a man, her president, is not, as Crouch bracingly notes, "Superman".

Basman v Nabokov on "The Death of Ivan Ilyich"

After we leave the lawyers’ chambers and see their self-concerned reactions to the news of Ilyich’s death, we are soon at the scene of his standing coffin at his home, and then, after that, we begin at the beginning and follow his path to his death, which ends as follows:

“’It is finished!’ someone said near him. He heard these words and repeated them in his soul. Death is finished, he said to himself. ‘It is no more.’ He drew in a breath, stopped in the midst of a sigh, stretched out and died.”

But at the novella’s beginning, proprieties are observed in Ilyich’s associates and colleagues visiting the bier while cadging a later bridge game. Their condolences cover their private calculations on the career possibilities his death may open for them. Ilyich’s widow is mostly concerned with whether she can augment the pension she’ll receive and so grills Peter Ivanovich about it. Once he tells her there’s nothing more for her, she immediately wants to extricate herself from his company. So dismissed, Ivanovich surveys the morbid, mawkish scene—satirically rendered—and then gets off to his bridge game, arriving right after the first rubber, which makes it easy for him “to cut in.”

We are shown obeisance to form and convention as Tolstoy pillories Ilyich’s family life, his career, his conventional self-regard, his social milieu. All are enmeshed in shallow conventionality, in concerns with rank and standing, fashion, hierarchy and display, and all show the utter absence of interiority. Ilyich is shown in virtual ecstasy in lovingly furnishing his new home after he lucks into a fortuitous promotion and a higher salary, which inevitably won't be, could never be, enough. Also savaged are the absurd doctors Ilyich consults, who refuse to answer his simple questions about how sick he is and whether he can recover, taking sophistic refuge in their seeming technical expertise. Ilyich similarly exults in his official power over others, which increases as his legal career advances.

But as his illness progresses, all these preoccupations fall away. He increasingly sees through his home, his furniture, his family's concerns--except for his son--his petty power and vain professional self-regard and finally sees all this as a mode of untruth, a mode of illusion. Against all of this, where is there some spark of human truth? The spark is in the figure of the servant Gerasim, who displays wisdom, good cheer, sympathetic insight, an ease with death's prospect and genuine compassion for Ilyich.

So, as the novella progresses, the tone deepens and darkens and the satire of, and jibing against, empty conventionality and bourgeois preoccupation ebbs. What flows into its place is a psychological and philosophical exploration of the meaning of a kind of Job-like, arbitrary and inexplicable suffering unto death and of that suffering as a basis for judgment about the meaning of life and how one ought to live his life. Ilyich’s unconquerable pain, causing his suffering, predominates and intensifies and casts an ever larger shadow over the emptiness Ilyich celebrated, and his bourgeois compatriots celebrate, as meaningful. One is reminded of a quietist, detached Lear urging Cordelia:

“No, no, no, no! Come, let’s away to prison.
We two alone will sing like birds i' th' cage.
When thou dost ask me blessing, I’ll kneel down
And ask of thee forgiveness. So we’ll live,
And pray, and sing, and tell old tales, and laugh
At gilded butterflies, and hear poor rogues
Talk of court news, and we’ll talk with them too—
Who loses and who wins, who’s in, who’s out—
And take upon’s the mystery of things
As if we were God’s spies. And we’ll wear out
In a walled prison packs and sects of great ones
That ebb and flow by the moon.”

At last, as noted at the outset of this note, Tolstoy grants Ilyich surcease from pain in his life’s last moment as he ceases harbouring hatred for his wife and daughter and gains compassion for their lives of delusion, self-obsession and unself-knowing. In that compassion Ilyich’s pain gives way to light, only light. He, in the end, wants to ask his family to forgive him but only manages to pronounce “Forego”. But “Forego” has deep thematic resonance, suggesting the foregoing of illusion and falsity in the obsession with things and status and the foregoing of hatreds in the capacity for pity and compassion.

Thus, more comprehensive than Nabokov’s religious interpretation of Ilyich is Tolstoy’s thematic notion of suffering unto death as life’s overarching tragic condition, which ought to compel the virtues Gerasim embodies as fundamental, and thus ought to compel one to live a certain kind of life embodying those virtues.

In a Word: Nabokov on "The Death of Ivan Ilyich"

To quote Nabokov: "The Tolstoyan formula is: Ivan lived a bad life and since the bad life is nothing but the death of the soul, then Ivan lived a living death; and since beyond death is God's living light, then Ivan died into a new life – Life with a capital L."

Ted Belman in Israpundit says the Arabs are Still Stuck on Rejection

The Arabs are still stuck on rejection

By Ted Belman

Last week Palestinian Prime Minister Salam Fayyad angrily left a UN Ad-Hoc Liaison Committee meeting and canceled a scheduled subsequent press conference with Deputy Foreign Minister Daniel Ayalon in New York. He did so after Ayalon refused to approve a summary of the meeting which said “two states” but did not include the words “two states for two peoples.” Ayalon said afterwards, “What I say is that if the Palestinians are not willing to talk about two states for two peoples, let alone a Jewish state for Israel, then there’s nothing to talk about, … if the Palestinians mean, at the end of the process, to have one Palestinian state and one bi-national state, this will not happen.”

Days earlier, at the beginning of a Cabinet meeting, PM Netanyahu said, “The foundation of the state of Israel is that it is the nation-state of the Jewish people,”… “That is the real basis of the end of demands from the state of Israel and the end of the conflict between the two peoples.”

According to TimesLive, “Netanyahu has made recognition of Israel’s Jewish character a central demand, suggesting the Palestinians’ failure to do so means they have not come to terms with Israel’s existence.”

Perhaps this will put to rest second thoughts on the wisdom or necessity of demanding Palestinian recognition of Israel as a Jewish State.

The truth of the matter is that the Arabs are stuck on rejection. They have been since the signing of the Palestine Mandate in 1922. It was “in favour of the establishment in Palestine of a national home for the Jewish people”, and mandated that Britain, the Mandatory Power, “shall facilitate Jewish immigration under suitable conditions and shall encourage, in co-operation with the Jewish agency referred to in Article 4, close settlement by Jews on the land, including State lands and waste lands not required for public purposes.”

In 1921 Britain appointed Mohammad Amin al-Husayni to the post of Grand Mufti of Jerusalem. His goal was to thwart the Mandate in order to secure the independence of Palestine as an Arab state. He led violent riots against the Jews. This violence continued throughout the twenties, thirties and forties.

In 1947, the UN General Assembly passed Resolution 181, known as the Partition Plan, which invited both Jews and Arabs to establish their states in the land designated for them. The Jews accepted the invitation and declared the State of Israel on May 14, 1948. The Arabs rejected the invitation and declared war on Israel. Against all odds, the Israelis prevailed and increased the territories under their control. A cease fire was agreed to at the urging of the Arabs. It was followed by an Armistice Agreement in 1949 which demarcated the battle lines with a green pen. Israel was prepared to accept the “greenline” as the final border but the Arabs insisted otherwise.

Thus the Egyptian-Israeli agreement stated "The Armistice Demarcation Line is not to be construed in any sense as a political or territorial boundary, and is delineated without prejudice to rights, claims and positions of either Party to the Armistice as regards ultimate settlement of the Palestine question." A similar provision was included in the agreement with Jordan.

The Arabs insisted that all refugees who had fled the hostilities to other countries be supported by the UN and kept in camps until they could be returned to their homes. Such an unprecedented move was to keep the Arab hope alive of destroying the Jewish state.

In 1967, the Arabs were ready to try again to destroy Israel and began massing their armies east of the greenline. Israel pre-empted and the Arabs suffered a colossal defeat. Israel ended up in possession of the Sinai, the Golan and Judea and Samaria (West Bank). In the wake of the war, the Security Council of the UN passed Res 242 which authorized Israel to remain in occupation of these territories until she could withdraw to “secure and recognized borders”. The resolution didn’t require all territories to be vacated but the Arabs insisted otherwise.

In 1967, the Arab countries, meeting in Khartoum, passed a resolution declaring “no recognition, no negotiations and no peace.” In the same spirit they rejected Res 242 when it was passed later that year. The Arabs were still stuck on rejection.

A breakthrough finally came in 1979 when Egypt and Israel signed a peace treaty, the fruit of the Camp David Accords, in which the parties agreed that they will “recognize and will respect each other's sovereignty, territorial integrity and political independence.” No mention was made of recognizing Israel as a Jewish state. It wasn’t an issue then. In 1994 Jordan followed suit, after transferring all its rights in Judea and Samaria to the PLO.

In 1993, Israel and the PLO signed a Declaration of Principles known as the Oslo Accords. It essentially set out a framework for negotiations in which all final status issues could be resolved. A precondition to signing these accords was an exchange of letters between Arafat and Rabin. In Arafat’s letter he wrote,

“The PLO recognizes the right of the State of Israel to exist in peace and security.

“The PLO accepts United Nations Security Council Resolutions 242 and 338.

“The PLO commits itself to the Middle East peace process, and to a peaceful resolution of the conflict between the two sides and declares that all outstanding issues relating to permanent status will be resolved through negotiations.

“In view of the promise of a new era and the signing of the Declaration of Principles and based on Palestinian acceptance of Security Council Resolutions 242 and 338, the PLO affirms that those articles of the Palestinian Covenant which deny Israel's right to exist, and the provisions of the Covenant which are inconsistent with the commitments of this letter are now inoperative and no longer valid. Consequently, the PLO undertakes to submit to the Palestinian National Council for formal approval the necessary changes in regard to the Palestinian Covenant.”

Once again, recognition of Israel as a Jewish state was not at issue. But the PLO and Fatah failed to amend the provisions of their charters, both of which called for the destruction of Israel. The Palestine National Council (PNC) also failed to amend the Palestine Covenant. In effect, all the other commitments in Arafat’s letter were rendered meaningless. Thus the PNC still rejected Res 242 and the recognition of Israel. Furthermore, by their actions and continued resort to violence, the PNC and its successor, the Palestinian Authority, rejected their commitment to a “peaceful resolution” of the conflict. The Arabs were still stuck on rejection.

In 2003, the US reinvigorated the peace process by insisting that both the PA and Israel accept the Roadmap, which had just been drafted with the approval of Saudi Arabia. Until then, Resolution 242 had provided the only parameters for resolving the issues of borders and refugees. The Roadmap also did not require recognition of the Jews as a people or of Israel as a Jewish state.

Notwithstanding Israel’s strenuous objections, the Saudi Plan, later amended and renamed the Arab Peace Initiative (API), was included in the Roadmap, and it provided totally different parameters for establishing the final borders. It insisted on the greenline as the border, with minor exchanges of land, thus requiring Israel to vacate all the territories. The US thereby undermined Israel’s right to defensible borders. The Roadmap required a clear, unambiguous acceptance by both parties of the goal of a negotiated settlement. Its first requirement was that “Palestinian leadership issues unequivocal statement reiterating Israel’s right to exist in peace and security and calling for an immediate and unconditional ceasefire to end armed activity and all acts of violence against Israelis anywhere. All official Palestinian institutions end incitement against Israel.” Needless to say, the incitement and violence continue to this day.

The API further required withdrawal from the occupied Syrian and Lebanese territories and a just solution to the “Palestine refugee problem” pursuant to GA Res 194. It also required “The acceptance of the establishment of a sovereign independent Palestinian state on the Palestinian territories occupied since June 4, 1967 in the West Bank and Gaza Strip, with East Jerusalem as its capital.”

In effect, the Arabs continued to reject Res 242.

In return for Israel agreeing to all these things, the Arabs would affirm:

I- Consider the Arab-Israeli conflict ended, and enter into a peace agreement with Israel, and provide security for all the states of the region.

II- Establish normal relations with Israel in the context of this comprehensive peace.

Totally aside from whether the Arab countries can be trusted to do so, there was no affirmation of Israel as a Jewish state. Besides, Israel was not about to accept their “take it or leave it” terms.

So why has Netanyahu now insisted on such recognition? Many argue that Israel doesn’t need such recognition and that as a sovereign state she can define herself. Netanyahu first mentioned this requirement in his Bar Ilan speech, but he wasn’t the first to do so. Former PM Olmert made the demand in 2007, when he told the EU that the foundation of negotiations with the PA must be its “recognition of the State of Israel as the state of the Jewish people”—an issue he said was “not subject to either negotiations or discussion.”

And before that, GW Bush, at the urging of PM Sharon and in conjunction with the proposed disengagement, wrote a letter in 2004 in which he stated, “The United States is strongly committed to Israel's security and well-being as a Jewish state.” He also ignored the Saudi Plan and wrote, “As part of a final peace settlement, Israel must have secure and recognized borders, which should emerge from negotiations between the parties in accordance with UNSC Resolutions 242 and 338.” Perhaps he was making amends for its inclusion in the Roadmap.

Perhaps the reason is that the PLO charter provides,

"The Balfour Declaration, the Mandate for Palestine, and everything that has been based upon them, are deemed null and void. Claims of historical or religious ties of Jews with Palestine are incompatible with the facts of history and the true conception of what constitutes statehood. Judaism, being a religion, is not an independent nationality. Nor do Jews constitute a single nation with an identity of its own; they are citizens of the states to which they belong."

This lie must be retracted.

This recognition is particularly important because it puts an end to the PA demand for a “right of return,” which, if implemented, would put an end to Israel as a Jewish state. Israel is insisting that all refugees go to the new Palestinian state, when established, and not to Israel. Furthermore, one can expect that after an agreement is signed, if that ever happens, the Arabs will keep up the demonizing of Israel, demanding that she give equal rights to Arab Israelis. Hundreds of thousands of Arabs would infiltrate into Israel as illegal aliens and would insist on staying there, and so on.

The Arab Peace Initiative is also demanding, according to Res 194, that Palestinian refugees be allowed back into Israel should they want to go there. Their offer of peace is based on Israel not being a Jewish state and the return of hundreds of thousands of Arab refugees to Israel. So the Arabs are still stuck on rejection.

Now, for over a year and a half, Abbas has been refusing to negotiate directly. The reason is that negotiations require give and take and end in a signed agreement. Abbas has no interest in such an agreement. He wants to get more concessions, such as a building freeze, without giving anything tangible in return. He wants borders imposed on Israel by the UN; thus he would not have to agree to them. He would just declare a state on what’s left and continue to reject Israel as a Jewish state. The ultimate objective, as their Charter says, is to re-conquer Palestine as defined by the Mandate, thereby putting an end to the Jewish state.

Even if Abbas were willing to compromise, he has no authority to do so, and his agreement would not bind all the Palestinians. Furthermore, he would be signing his own death warrant. Hamas and Iran are still adamant that the Zionist state must disappear from the Middle East.

You see, the Arabs are still stuck on rejection.


______________________________________________________

Postscript from Ted Belman:

...Thanks for posting my article. I noticed that I stupidly missed two incorrect dates which I really know like the back of my hand.

...Could you change 1040 to 1949 (Armistice Agreement) and 1993 to 2003 (Roadmap).

I also added this paragraph just after the paragraph about Olmert.

....And before that, GW Bush, at the urging of PM Sharon and in conjunction with the proposed disengagement, wrote a letter in 2004 in which he stated "The United States is strongly committed to Israel's security and well-being as a Jewish state." He also ignored the Saudi Plan and wrote "As part of a final peace settlement, Israel must have secure and recognized borders, which should emerge from negotiations between the parties in accordance with UNSC Resolutions 242 and 338." Perhaps he was making amends for its inclusion in the Roadmap....

Sunday, September 26, 2010

Dear Harvard Law: Max Pearce Basman

You will remember that I confirmed your acceptance of my grandson Max Pearce Basman: http://basmanroselaw.blogspot.com/2009/01/curious-case-of-max-pearce-basman.html. Max was three.

Wise decision, I wish to confirm. I was this very day reading him, now five, and his brother Leo, three, The Paper Bag Princess by Robert Munsch. I asked them both whether the princess was right to reject, finally, the prince, after beating down the dragon as she did. Max said without hesitation "Yes". When I asked him why, he said, and I paraphrase, but it's pretty close, "It doesn't matter what's on your outside, it's what's inside." I asked Max where he learned that. He said, and I paraphrase, but it's close, "I figured it out myself."

By the way, Leo took the opposite position and, when Max insisted on his own view (already a litigator), Leo started to sulk, with an implicit threat of tears (already an alternative dispute resolution specialist). Something has to change there or the kid's gonna do his law schooling at Yale.

Cross posted to The Max and Leo Years: http://themaxandleoyears.blogspot.com/2010/09/dear-harvard-law-max-pearce-basman.html

A Critique of Pacifism

Pacifism holds the use of force to be morally impermissible. The use of force inflicts a particular outcome upon another being against his or her will, causing direct harm to that being. Pacifism holds it is morally wrong to use force even in response to violence, based on the high value of human life. No end can justify violence. Absolute pacifism rules all uses of force morally wrong.

To whom does pacifism apply? One argument is that while it is not morally wrong not to live as a pacifist, nonviolence is morally preferable, a doing of good that goes beyond the requirements of universal duty.

But different versions of pacifism vary in the degree to which they prohibit the use of force: absolute pacifism at one extreme and just war theory at the other. These two extremes also differ fundamentally in how they relate means and ends when considering the value of human life. Just war theory holds that force may be necessary when there is a goal more important than individual life or well-being. Absolute pacifism denies that any goal can justify the use of force. Is force permissible if it is not lethal? If so, this view recognizes both the value of human life and the practical challenges to absolute pacifism. But it opens up a slippery slope of violent actions.

Don’t all versions apart from absolute pacifism allow the value of a single human life to be overridden by other concerns, and so justify force in some situations?

One theory permits the use of violence only to defend other people, thus prohibiting force in self-defence. But what differentiates oneself from all other people? And how is a threat to others enough to justify the evil of using force while a threat to self is not?

It may be argued that force may be used only in defence of the defenceless. But doesn’t this say that all should be like the defenceless? If this rule were universally applied, then all would be unable to defend themselves but would be obligated to defend everyone else. If we are individually denied self-defence in principle, then the general use of force for defence must be denied.

Each less-than-absolute theory demonstrates that absolute pacifism is the most coherent and philosophically relevant expression of the pacifist ideal. An answer may be that pacifism is a tactic promoting nonviolence. But the tactical refusal to use force may lead to bad consequences, suffers from the unpredictability of consequences, and loses principled grounding. After all, just war theorists recognize in principle that nonviolence is morally necessary in most cases, and even when war is just, nonviolence may be tactically better in some instances.

Absolute pacifism is consistent because it holds simply and without qualification to the view that the use of force is morally wrong, the true spirit of pacifism. Absolute pacifism uniquely challenges violence and force as necessary human interactions. It asserts an individual and social moral ideal and a criterion for judgment.

Absolute pacifism, however, is fundamentally inconsistent. It denies violence, but force often is the only means of preventing violence. Violence is wrong, and one has the right to be free of it, a right to prevent it. That right, to be meaningful, entails the means to make it effective, including proportionate force. So if a person believes violence is wrong, his use of force to prevent it is justifiable. A person may choose to reject his own right to self-defence, but, if opposed to violence, cannot in logic refuse to defend the rights of others.

Moreover, the absolute claim that it is always wrong to use force is problematic. Force is not inherently morally right or wrong: consider using force to prevent a suicide or harm to an innocent; that would not necessarily be morally wrong. Absolute pacifism, then, may not privilege human life. And it can conflict with other duties, such as to the well-being of a friend. As a criterion for the judgment of actions, it can’t resolve conflicts between duties. A moral account must consider consequences: absolute pacifism asserts its means and ends regardless.


Wonderment:

The absolute claim that it is always wrong to use force is problematic. It is at least equally problematic to claim that it is sometimes right to use force. One big problem is the harm done and its poisonous legacy. Another big problem is that it is impossible to establish coherent rules for the use of violence, except under tightly constrained circumstances that cannot be generalized. For example, no pacifist thinks that it is wrong to shoot someone who has his hand on the detonator that will blow up a hospital, but the minute a situation like that serves as the principle for resolving political conflict, you have utter incoherence and a recipe for disaster (i.e., war).

"Force is not inherently morally right or wrong, say using force to prevent a suicide or harm to an innocent." I'm talking about deadly force, not mere coercion. Again, I know of no pacifists who reject all coercion. It is ridiculous, for example, to argue for a society in which we would simply open all the prison gates for violent offenders around the world and see how things go. Gandhi didn't propose that, nor did MLK. Second, pacifism is a combined philosophy of pragmatic political action and ethics. It has to be judged on both grounds, not just one. (The same goes for Just War theory). That provides an additional reason why examples like "What if Charlie Manson invaded your home and was about to carve up your pregnant wife?" are silly. Pacifism does not rise or fall on the action or inaction of a victim suffering a home invasion by a psychopath. Gandhi and King advised passive resistance and civil disobedience in the context of a political movement, in accord with the fundamental principles of that movement.


"A moral account must consider consequences: absolute pacifism asserts its ends regardless..." That also seems to me to be the case with Just War theory. Warists rarely give a full account of the consequences of their violence. First of all, such consequences are in great part incalculable. But second, the immediate casus belli always magically trumps the potential evil of the consequences: If Saddam Hussein takes one step into Kuwait, we will blast him into the stone age with the full support of the UN, consequences be damned. Our "just war" rules (bad guy crosses border and we kill him) have painted us into a corner. Another problem I have with just war theory is that our aggressive impulses are neither morally nor intellectually trustworthy. All participants in wars believe the other side to be evil, wrong, insane, intolerably unjust, etc. But no one who pauses for an instant to seriously think about things can possibly buy into that theory. What you invariably have, even with the Nazis, is a set of historical grievances on both sides, a game of chicken and finally the schoolyard brawl. It is very difficult to argue that wars among nations or between nations and non-state players have a greater logic than wars between prison or street gangs. There's a lot more, but I'll leave it there for now.

Me:

Your first point is that “It is at least equally problematic to claim that it is sometimes right to use force” in response to me asserting, as you quote me, “Moreover, the absolute claim that it is always wrong to use force is problematic.” You stipulate, in responding to a different quote of mine which you cite, that by force you mean “deadly force, not mere coercion”. And your reasoning is the harm and poisonous legacy issuing from the sometimes use of force.

Then you make two different arguments, interrelated to make one point:

1. There are no generalizable rules effectively constraining the use of force; and

2. No pacifist, you say, would oppose the use of deadly force in a ticking time bomb type situation.

There is to my mind something gnawingly out of kilter in these arguments. Firstly, you are equating my sometimes use of force with the absolutist’s never use of force, except when the time bomb is ticking. Secondly, you are confusing, I think, deontological arguments with consequentialist arguments.

Part of the thrust of your latter argument is, as I construe it, that the ineffectuality of war rules to discipline the due use of violence leads not only to harm done and a poisonous legacy, but to those consequences exceeding, in the harm done, whatever benefits derive from the use of deadly force, say as in fighting a war. And the implication in this weighing argument is that if the benefits outweighed the harms, the use of deadly force would be justifiable.

That is a different order of proposition from the straight deontological assertion the absolutist makes regardless of the weighing of harms and benefits.

A problem you need to confront in both the moral argument and in your weighing argument is counterfactuals, given certain historical examples of aggression. You need to confront defensive wars justifiable on just war criteria and ask yourself what consequences would flow from a pacific posture. You need to confront instances of genocide, now an emerging casus belli overriding sovereignty in international law, and face the consequences of non-intervention on the ground of harm overriding benefit. You need to confront, for one example, the concreteness of the doomed Warsaw Ghetto and argue for the utilitarian and moral superiority of those Jews’ nonviolent resistance. And you need to confront, I argue, the example of someone threatening or causing serious harm to your wife, children, others dear and near to you, your fellow citizens even, and argue for the superiority of a pacific stance.

You need to commit yourself, in maintaining your position, to an acceptance of, and living with, all these counterfactual consequences.

Generally, your reasoning here partakes of the fallacy, I think, of the perfect being the enemy of the good, but, here, adapting the fallacy’s terms, of the terrible being the enemy of the even worse.

You cite my statement “Force is not inherently morally right or wrong, say using force to prevent a suicide or harm to an innocent.” I’ll start by agreeing with you on two things:

1. as mentioned, I’m happy to confine our terms to deadly force, not mere coercion, though I observe:

1(1) the seeming bright line between them—sheer lethality—isn’t necessarily coherent—what about bludgeoning and maiming; and

1(2) anything less than lethality as a bright divider leads to a lot of problems in assigning cases to one side of the line or the other;

2. I’ll stipulate that pacifism is a proactive and philosophical measure, though I observe:

2(1) it seems to me there is a tension, verging on incoherence (in the logical sense), between its absolutism—the underpinning of my entire argument against it—and your characterization of it as pragmatic. If it’s to be used pragmatically, then isn’t its absolute position defined out of existence, and doesn’t it then become one more peaceful arrow, one more strategy and tactic, to be chosen amongst an array of means, the others involving the use of deadly force when pragmatically justified?

This last observation goes to your dismissal of your own Charlie Manson example. I’m not sure I understand your dismissal, which you put in big terms—“passive resistance and civil disobedience in the context of a political movement, in accord with the fundamental principles of that movement”—the size of which evades, it seems to me, the brunt of the argument you face.

I mean what do these big words amount to? And what consistent principles does this disobedience rest on? I’ve already shown, I suggest, that to the extent your claim is for the pragmatic use of pacifism, civil disobedience—call it what you will—to be used in some situations and not in others, you are no longer a pacifist, just someone who will counsel the use of pacifism in some situations. But if pacifism isn’t to be used in “others”, then what is? Like pregnancy, you can’t be just a little bit pacific in the terms we are mooting.

If the principles of pacifism, nonviolent resistance, don’t allow for that stance in the face of discrete instances of violence against individuals, as opposed to large instances of violence or oppression, I need to know why in principle. Why don’t the moral imperatives and first principles driving its philosophy negate the use of deadly violence in the more homely instances of violence, when that use would be proportionate in the circumstances?

Interestingly, there is a telling, though not complete, consanguinity between just war theory and the criminal law of self-defence. Self-defence becomes a criminal act when the use of force is not suitable, reasonable or proportionate in the circumstances. I suggest to you that, as against your dismissal of a discrete instance of harm—Charlie Manson, or, say, an assailant who is not a psychopath—there is a principled relation between the discrete use of force in the homely example and the use of it—war essentially—in the more grand examples.

Sam Harris on Pacifism from "The End of Faith"

(as edited by me)

Pacifism is generally considered to be a morally unassailable position to take with respect to human violence. The worst that is said of it, generally, is that it is a difficult position to maintain in practice. It is almost never branded as flagrantly immoral, which I believe it is. While it can seem noble enough when the stakes are low, pacifism is ultimately nothing more than a willingness to die, and to let others die, at the pleasure of the world's thugs. It should be enough to note that a single sociopath, armed with nothing more than a knife, could exterminate a city full of pacifists. There is no doubt that such sociopaths exist, and they are generally better armed.


Gandhi was undoubtedly the twentieth century's most influential pacifist. The success he enjoyed in forcing the British Empire to withdraw from the Indian subcontinent brought pacifism down from the ethers of a religious precept and gave it new political relevance. Pacifism in this form no doubt required considerable bravery from its practitioners and constituted a direct confrontation with injustice. It is clear, however, that Gandhi's nonviolence can be applied to only a limited range of human conflict.

We would do well to reflect on Gandhi's remedy for the Holocaust: he believed that the Jews should have committed mass suicide, because this "would have aroused the world and the people of Germany to Hitler's violence." We might wonder what a world full of pacifists would have done once it had grown "aroused" - commit suicide as well? Gandhi was a religious dogmatist, of course, but his remedy for the Holocaust seems ethically suspect even if one accepts the metaphysical premises upon which it was based.

If we grant the law of karma and rebirth to which Gandhi subscribed, his pacifism still seems highly immoral. Why should it be thought ethical to safeguard one's own happiness (or even the happiness of others) in the next life at the expense of the manifest agony of children in this one?

Gandhi's was a world in which millions more would have died in the hopes that the Nazis would have one day doubted the goodness of their Thousand Year Reich. Ours is a world in which bombs must occasionally fall where such doubts are in short supply. Here we come upon a terrible facet of ethically asymmetric warfare: when your enemy has no scruples, your own scruples become another weapon in his hand. It is, as yet, unclear what it will mean to win our war on "terrorism" - or whether the religious barbarism that animates our enemies can ever be finally purged from our world - but it is all too obvious what it would mean to lose it.

Life under the Taliban is, to a first approximation, what millions of Muslims around the world want to impose on the rest of us. They long to establish a society in which -when times are good- women will remain vanquished and invisible, and anyone given to spiritual, intellectual, or sexual freedom will be slaughtered before crowds of sullen, uneducated men. This, needless to say, is a vision of life worth resisting. We cannot let our qualms over collateral damage paralyze us because our enemies know no such qualms.

Theirs is a kill-the-children-first approach to war, and we ignore the fundamental difference between their violence and our own at our peril. Given the proliferation of weaponry in our world, we no longer have the option of waging this war with swords. It seems certain that collateral damage, of various sorts, will be a part of our future for many years to come.

Saturday, September 25, 2010

A Word on the Great Mickey Rourke

Captain Charisma//How Mickey Rourke became irresistible again.
By Dana Stevens//Posted Thursday, Feb. 19, 2009//Slate

It's best not to hazard a guess as to whether Mickey Rourke will pick up a best actor Oscar for The Wrestler this Sunday night; the odds have him losing to Sean Penn, but it wouldn't be the first time this sly, mercurial, irreplaceable actor has overturned everyone's expectations. As Rourke awaits his big moment (though, in fact, if the portrait of him that's emerged in recent interviews is accurate, he may not give a shit about the outcome—he's just enjoying the ride), I want to revisit the role in which many of us first noticed him and in which I remember him best.

I don't propose to provide a full survey of Rourke's career: Sheila O'Malley has done that, definitively and beautifully, in this chronicle of her long-standing love affair with his work. But for those who haven't had the chance to confirm Rourke's talent via Netflix lately, let me just state that your fond memories of Diner are not wrong. In the 27 years since its release, I'd come to assume that the movie couldn't justify the level of affection I had for it. Diner had congealed in my mind into a kind of feature-length Happy Days, a cutesy time capsule of '50s nostalgia. Whenever I'd think about Diner (or hear any of the songs from its glorious soundtrack, especially the novelty hit "Ain't Got No Home"), I'd feel an almost guilty glow of well-being that somehow never translated into the desire to see it again: I didn't want to be let down, to discover it was never as good as I'd thought.

Something similar seems to have happened with our cultural memory of Mickey Rourke. When he got so embarrassing in the early '90s—if I had to date it precisely, I'd say 1991, the year of Desperate Hours and the deeply unfortunate Wild Orchid—it was as if we had to forget why we once loved him so much, to downgrade his image retroactively in our minds. Adrian Lyne, the director of 9½ Weeks, once said that "if Mickey had died after Angel Heart, he would have been remembered as James Dean or Marlon Brando": a gifted young man too beautiful to live.

Instead he aged, drank, took bad roles, made stupid decisions, and eventually disappeared from sight. If you've picked up a periodical or clicked on a Web site in the last few months, you know about Rourke's brief, doomed career in boxing, his decade or more on the skids, and his unlikely resurrection in The Wrestler. But the arc of his comeback is hard to appreciate unless you peel back the layers of '90s cheese and look again at what Rourke was in the '80s: the freshest, most vivid, most exciting actor around.

So, then: Diner (1982), Barry Levinson's first and best movie, a wistful comedy about a bunch of young men in Baltimore in the winter of 1959. Though the ensemble acting is perfection (has Daniel Stern ever been so well cast? Ellen Barkin certainly hasn't), it's Rourke's movie before he even appears on-screen: In the opening shot, Modell (Paul Reiser) enters a crowded party, looking for Rourke's character, Bobby "Boogie" Sheftell. "Have you seen Boogie?" he asks as the camera tracks him through the jitterbugging crowd. "Have you seen Boogie?" And then, in the distance, we see Boogie—just standing around like everybody else, but clearly the guy to know at the party. As O'Malley cannily points out, Rourke always plays that guy, the one to know: "We all know guys like that, guys who are not famous, but who have a glitter to them, something 'extra.' "

That "something extra" is apparent in everything Rourke does in Diner: the loving sadness in his eyes as he watches his developmentally arrested buddies bullshit around the diner table, or the strange, almost feral way he suddenly pours half a dispenser's worth of sugar into his mouth as he sits at the diner counter, chasing it with a swig of Coke. By way of illustrating what made the young Rourke such a marvel to watch, it's worth doing a close reading of one of Diner's raunchiest and yet tenderest moments, known among Diner-heads (oh, they're out there) as the "pecker in the popcorn" scene.

The setup: Boogie has made a bet with his pals that he can get local beauty Carol Heathrow to "go for his pecker" on the first date. Unbeknownst to the guys, Boogie has a lot riding on this bet: He owes his bookie $2,000, and things have started to get ugly. So he stacks the deck against Carol: As they sit together in a Sandra Dee movie, he maneuvers his manhood through the bottom flap of the popcorn box on his lap, so that she unwittingly touches it while reaching in.

Like most of the rest of Diner, the "pecker in the popcorn" scene is a single, long-form, punch-line-free joke, and it's irresistibly funny. But the moment I want to show you comes just after (beginning around the 3:30 mark in this clip), when a grossed-out Carol flees to the ladies' room and Boogie follows her. He proceeds to win back her trust with a preposterous (and physiologically impossible) lie about how the pecker got in the box. The multiple and conflicting motivations at work are a Thanksgiving feast for any actor: Boogie must win Carol's affections back by faking boyish vulnerability. But we, the audience, know that Boogie truly is vulnerable; he needs that $2,000, not to mention the esteem of his friends, and he's using every tool in his toolbox—the gentle, self-deprecating smile, the feigned embarrassment at his disingenuous "confession"—to maneuver Carol back into the movie theater and eventually to bed. He's a ruthless manipulator—and yet we still like Boogie so much that we pray he'll pull it off.

That's the thing with Rourke: He always plays the counter-emotion beneath the emotion, the anti-intuitive expression or gesture. (In his lazy midcareer period, this came to look like a reflexive tic—instead of seeming to hold a part of himself in mysterious reserve, Rourke simply seemed to be not trying.) But there were bad movies in which Rourke still managed to be great: 9½ Weeks is muddled and witless, nowhere near as sexy as it thinks it is.

But behold the striptease scene, in which Kim Basinger strips to a Randy Newman song as Mickey watches in a bathrobe, eating popcorn and smoking a cigarette. It's such a stylized '80s scene—the silhouetted blonde in a doorway, the white soul on the soundtrack—that it borders on being an MTV music video. But Rourke undermines the slick voyeurism by laughing in pure delight at his lover's performance. A few years later, in soft-core-porn trash like the unwatchable Wild Orchid, Rourke would caress his soon-to-be-wife Carré Otis with a grimacing solemnity meant to be "erotic." In 9½ Weeks, he sketches a whole relationship—a sick one, yes, but affectionate too—with a laugh.

Pauline Kael wrote a legendary review of Diner—legendary because when it appeared in The New Yorker, the studio was contemplating shelving the movie, and Kael's rave was rumored to have helped secure the film's release. In it, she singles out Rourke for praise that, in retrospect, breaks your heart: "The sleaziest and most charismatic figure of the group is Boogie, played by Mickey Rourke. … With luck, Rourke could become a major actor: he has an edge and magnetism, and a sweet, pure smile that surprises you. He seems to be acting to you, and to no one else." Of course, Mickey Rourke never had that kind of luck, or maybe he had it and threw it away. But he's finally becoming the actor those early appearances promised, and his smile still goes right through you.

No Sex, the Drugs of Illogic and Rock and Roll

Die Elvis Die: Two Broadway musicals prove that boomer nostalgia is even more eternal than Elvis.

David Hajdu//September 25, 2010//TNR

Memphis//Shubert Theatre

Million Dollar Quartet//Nederlander Theatre

Anyone in denial about the demise of the record business will find on Broadway these nights proof of death more conclusive than the disappearance of music stores from the malls or the elimination of DJs from radio stations. Two musicals staged this year—Memphis, which won the Tony Award for Best Musical, and Million Dollar Quartet, which is set in the same city in the same period and deals with many of the same themes—verify the extinction of the old-school music industry by showing it to exist now solely as sentimental myth. Having always been a middlebrow outlet of attraction for middle-class tourism, Broadway excels at packaging nostalgia as popular art. Like the vendors set up under their marquees, Broadway theaters are in the souvenir business, and the hollow plastic objects presented as rock musicals at the Shubert and Nederlander theaters are mementos of a pop culture past that never really was.

Both Memphis and Million Dollar Quartet are set in a time a bit more than fifty years ago—that is, a bit more than fifty years after the year 1900. They take place at the equinox of a hundred-plus-year period that most of us today tend to conceive as comprising the whole history of popular entertainment. (According to Playbill, the time of Memphis is “The ’50s,” and Million Dollar Quartet is loosely based on an actual event that took place on December 4, 1956.) Both shows deal with the early days of rock and roll. Memphis involves a fictional disc jockey inspired by Dewey Phillips, a real-life white boy who spun black music for teenagers of both races; and Million Dollar Quartet centers on the record producer Sam Phillips, another white Memphis music-business professional with a shrewd interest in black culture, who launched the careers of the show’s titular quartet of Elvis Presley, Carl Perkins, Jerry Lee Lewis, and Johnny Cash. (The two men named Phillips were not related, though they were friends and, briefly, business partners.) The world of these people is as distant from contemporary experience as vaudeville and silent movies were from life in the mid-1950s. Indeed, the parallel stories of Memphis and Million Dollar Quartet have about as much cultural resonance today as The Vamp, a show about a silent star played by Carol Channing, and Ankles Aweigh, a throwback to the burlesque era, had bearing on their day when they opened on Broadway in 1955.

To aging baby boomers, whose availability of time and cash make them ideal Broadway customers, the focus of these shows on the birth of the modern musical age has a flattering allure. It reinforces a solipsism that hardly needs the boost, advancing yet again the boomers’ fantasy that the world was born with them. I say this with no special pride in being over fifty myself. I don’t mind the age, at least not most of the time; but I feel embarrassed sometimes to be part of a generation whose vainglory is such that we fill theaters every night to see shows aggrandizing the circumstances of our enthusiasms—even worthy enthusiasms like rock and roll—in the heroic terms of myth. Elvis may have been “the king,” but he was not God, and it is the arrogance of my own age to think that everything of value and meaning arose with his kingdom just because we came along at the same time.

Memphis serves as the setting of both shows because of its critical function in rock history as the provenance of both Dewey Phillips’s show on WHBQ radio and Sam Phillips’s label, Sun Records, which recorded the pivotal white rockers as well as their black predecessors Muddy Waters and Howlin’ Wolf, among others. For the prologue to Last Train to Memphis, the first book in his fine two-volume biography of Elvis, Peter Guralnick chose to describe the day in 1950 when Sam Phillips first met Dewey Phillips. In Guralnick’s elegant and affectionate account, Sam “wanted so badly to meet” Dewey to share their common passion for African American music and culture. Sam Phillips explained to Guralnick that it was “genuine, untutored Negro” music he loved, and the musicians he sought were “Negroes with field mud on their boots and patches in their overalls [and] battered instruments and unfettered techniques.” As Guralnick points out, “The music that he was attempting to record was the very music that Dewey Phillips was playing on the air.”

Needless to say, neither of the Phillips men was looking for a singer like Paul Robeson, who was educated at Rutgers and Columbia; or a musician like Louis Armstrong, who had rather fettered technique and who played fastidiously cared-for trumpets; or a composer like Duke Ellington, who would not have gone out in public with a patch on his socks. Sam Phillips, like Alan Lomax before him, performed an invaluable service to posterity by documenting the art of African Americans, as well as that of their white peers and imitators, who were working in a vernacular milieu widely seen by most whites as disreputable—Phillips doing so for small fees to the artists, Lomax under contract to the Library of Congress. Muddy Waters knocked on Phillips’s door and Phillips let him in. Yet Waters, as the one doing the knocking, represented opportunity for Phillips as much as Phillips did for Waters. Moreover, the propositions that “untutored” music is more “genuine,” and that filth or deficiency confer authenticity, have always been racism and classism passing for egalitarianism.

Million Dollar Quartet is set in the Sun Records studio at 706 Union Avenue in Memphis, a storefront box considerably smaller and even blander than the set that simulates it on West 41st Street in New York, and the show takes as its inspiration the happenstance events of a Tuesday afternoon long enshrined in rock legend. Elvis Presley, who had already left Sun for RCA and was the hottest thing in the free world, was driving around Memphis with one of his girlfriends, a Vegas showgirl named Marilyn Evans. He noticed cars parked in front of the studio, and the two of them dropped in. Carl Perkins was wrapping up a recording session with his brothers Jay and Clayton on guitar and bass, “Fluke” Holland on drums, and a then-unknown session pianist, Jerry Lee Lewis. For the next couple of hours, Elvis and the boys jammed, just for the kick of making music together—or trying to. Sam Phillips, ever attuned to opportunity, kept the microphone on and the tape rolling.

At one point—how much of the session had progressed has never been clear—Johnny Cash, who was the biggest name on Sun Records at the time, though hardly a rock and roller, came by. Cash, in his memoirs, claimed to have gotten to the studio first. Other accounts have him arriving late in the proceedings, with his wife, to pick up money for Christmas presents. A few additional people, including Charles Underwood, a songwriter working for Phillips, and Cliff Greaves, a minor rockabilly singer, may or may not have played or sung. The uncertain facts, the conflicting stories, and the spottiness of the surviving evidence on tape all contribute to the mystique of the recordings as sacred text.

Several editions of the sessions have been released in various formats over the years, including a “complete” version with forty-seven tracks, currently available on CD and iTunes. The music is wonderful for not being monumental. It is ragged, casual, joyous, endearingly imperfect, and not for a moment phony or inflated. Elvis dominates, singing lead on all but a few songs and calling the tunes—many of them gospel numbers he no doubt assumed all the fellows had learned in church, such as “Jesus Walked That Lonesome Valley” and “Just a Little Talk with Jesus.” They goof around with “Jingle Bells” and “White Christmas” and try three or four times to pull together Chuck Berry’s “Brown-Eyed Handsome Man.” Elvis and Carl Perkins practically fight over who holds Berry in deeper awe. I think no recordings of Elvis, not even the canonical original Sun Sessions, capture him so free and true—unfettered in the best way, in joy, rather than in the way of his final years, in druggy indifference. (I have never been able to hear Johnny Cash on these recordings, though the fault may be my own. Cash always insisted he had been singing with the group, but at a distance from the mike and in a higher pitch than usual for him, to accommodate Presley’s keys, and I will not call a dead man a liar.)

The goings-on that Tuesday in Memphis probably have the makings of an eye-opening evening in the theater, but after Million Dollar Quartet we may never know. The show uses the fact that Presley, Perkins, Lewis, and Cash were all in the same room one day to justify the staging of what amounts to an open studio concert of the four singers’ greatest hits (Perkins’s “Blue Suede Shoes” and “Matchbox,” Presley’s “That’s All Right” and “Hound Dog,” Lewis’s “Great Balls of Fire” and “Whole Lotta Shakin’ Goin’ on,” Cash’s “Folsom Prison Blues” and “I Walk the Line,” none of which any of them did on December 4, 1956). All the music in the show is sung and played on stage, not lip-synched—a fact that an announcer emphasizes before the opening curtain—and it is delivered with polished fervor by four skillful performers who wisely leave enough of themselves in the work to avoid outright impersonation. They prevent their creamy-smooth, milky-white act from hardening into cheese. Still, the whole thing is not much more than a Vegas-style tribute show best suited to the city of Elvis’s undoing.

A narrative of sorts, threaded between the songs, has to do with Sam Phillips’s devotion to recording the music that he loved. “This is where the soul of a man never dies,” the Phillips character says in a line that came from the real Phillips and suggests, in its use of “soul” to evoke both the human spirit and blackness, the tension in Phillips’s way of paying tribute to African American music by having white people record it. Near the end of the show, the playwrights Floyd Mutrux and Colin Escott (the latter has written a couple of good books on American vernacular music) have Presley and the boys join in a toast to Phillips: “Here’s to the father of rock and roll!” Even after an hour of watching the glorification of Phillips for his having recorded a quartet of four great white voices of the early rock era, the line struck me as dubious, as if fatherhood were a matter not of who planted the seeds—in the case of rock, that would be black R&B artists such as Roy Brown, Wynonie Harris, Ike Turner, and many others—but of who took the baby pictures.

Foremost among those whom Sam Phillips admired was the person who did pretty much the same thing he did, but on the radio: Dewey Phillips. As Guralnick quotes Sam saying of Dewey, “He was a genius, and I don’t call many people geniuses.” Three blocks north of Million Dollar Quartet in Times Square, Memphis uses Dewey Phillips as the starting point for an original musical about the rise of rock in postwar Memphis. Dewey, with nominal tweaking, has become Huey, who, like his inspiration, is a hyperactive, undereducated former department-store record clerk who carries his love for black music over the airwaves, winning both the ire of the white establishment and the hearts of its teenage offspring. Further amours enter the show with the introduction of a sexy female singer who neatly stands in for the earthy, carnal dangers that black music represents to the narrow-minded whites of America in the ’50s and to the show’s equally unenlightened creators.

Memphis was written by Joe DiPietro—a librettist best known for his work with composer Jimmy Roberts on the poppish Off-Broadway musical comedy I Love You, You’re Perfect, Now Change—and David Bryan, the longtime keyboard player for Bon Jovi. Musically, the show is proficient, often pleasant, and occasionally fun, but for the most part it is numbingly derivative. Every song, and I mean every song, sounds vaguely and sometimes not-so-vaguely like something else. The big ballad, “Love Will Stand When All Else Falls,” is essentially a reworking of “You Light Up My Life.” “Colored Woman” sounds like an outtake from a mid-’70s John Lennon album. Half a dozen of the tunes feel like B-sides of Blood, Sweat & Tears singles, and most of the rest remind me of the things that sitcom soundtrack composers used to throw together when the script called for a song to be played on a radio in the show. They seem almost but never enough like actual songs.

It would be unfair to fault this music for sounding wholly unlike the R&B and early rock and roll that DJs such as Dewey Phillips and Alan Freed, Phillips’s more celebrated counterpart in Cleveland, played on the radio. The songs of Oklahoma! don’t sound like Dust Bowl ballads, and Memphis is supposed to be an original musical. Its failing is its lack of originality. Dewey Phillips, with his ear for idiosyncrasy, would probably not have played these songs.

Like Million Dollar Quartet, Memphis offers a discomfitingly archaic kind of hero: a white man blessed not only with special access to the mysteries of the black world, but also with the power to share the magic of black art with the white masses. For both Huey/Dewey and Sam Phillips, it is white privilege that makes them heroes. Today the particular mechanisms of their privilege—records and radio—seem almost archaic, too. In the contemporary era of social media and aggregated information, there can be no Hueys or Deweys or Sams, and Million Dollar Quartet and Memphis make that seem almost like good news.

Me:

I'm confused by the opening of this rather discursive review, which seems not to know exactly what it wants to argue or consistently say. Is it a non sequitur that the "record business," such as it was in the fifties, is dead?

I think so.

What isn't a non sequitur, and what disproves the thesis wrong-headedly and temporarily being mounted in this review's opening--only for Hajdu to get disconnectedly on to other things--is that early rock and roll, as distinguished from the record business of its time, and as manifest in the likes, for example, of the King and the Killer, is perennial.

It's that great music and the fascinating circumstances of its coming together, with the essential help of such great guys as the Phillipses, Sam and Dewey, that seem to be the gist of the two productions. And Broadway middlebrow schmaltz, if that's what they amount to--I haven't seen them, but I would, eagerly--can't touch or diminish that perennial freshness. Which is quite the opposite of Hajdu's confused, cart-before-the-horse thesis, which conflates the greatness of this burgeoning music with its commercial context.

Hajdu reasons B therefore A--the record business is dead, therefore the music survives only as sentimental myth--when B has no necessary relation to A, even if he had properly identified A.


As a kind of p.s.: Guralnick throughout his work on the Blues argues, and I agree with him, that the Blues, rooted in the earthy--putting it mildly (filth and deficiency, anyone?)--realities from which it sprang, was, exactly by the conditions of its genesis and development, genuine and authentic, and more so than some other musics. How could anyone think not?

This is hardly classism posing as egalitarianism. To think it is, is to mistake the claim in its concrete particularity for some kind of moronic abstraction--which is the last thing Guralnick asserts.