Saturday, March 31, 2012

Anecdotal Evidenz: The “trash in the ER” argument; and my dealings with CPG (1993-94), Part 1 of 2

[Hate to make excuses, but this has been a difficult blog entry to prepare, and Part 1 here may seem a little turgid and derivative in looking at an old article in JAMA; and the article’s analysis may be dated in several ways. But Part 2, whenever it’s available, will make this Part 1 make a lot more sense as preparation of the ground. And in any event, amid the cacophony of discussion of Obamacare, no one seems to have all the answers, so take it easy in assessing this entry. (Edited in tiny ways 5/1/12.)]


One argument that has been brought up in a wealth of circumstances, and seems to inspire passionate positions, is what I call the “trash in the emergency room” argument. In 1994, when the Clinton plan was being passionately debated and eventually voted down, this sort of argument was brought up—maybe not by politicians, but by grassroots types—among conservatives who were against the Clinton plan. They claimed the (or one) impetus for the plan was all the feckless and irresponsible slobs who only got health care when they had an emergency and went into the ER, and this is what drives up costs, yap yap yap.

Some eighteen years later, liberals have dragged out this dead dog of an argument to support why they want to keep the individual mandate in Obamacare. This particular contention is the sort of argument that makes me embarrassed to identify myself as a Democrat. And people will use it to make points with a logic, or a sense of the facts, that I wouldn’t share: for instance, a letter writer to the Star-Ledger, the New Jersey daily, on May 28 said that people who refuse to get insurance thereby embody the “I’ve got mine and the hell with everyone else” argument. Actually, if you asked me, the ones who say “I’ve got mine…” are those with employer-given Cadillacs of health plans, who don’t want to hear about those who can’t get insurance, or adhere to some solution that will benefit the system, because they’re just fine with how they milk their own plan for every little sniffle, and slobs without insurance can go scratch….

This is all the rhetorical side of the question of “What to do about the umpty-ump million uninsured?” As with so much else, whether in how I personally prefer to deal with entrenched problems on a “case study” and phenomenological basis, or in how others look at concrete cases journalistically, seeing the challenges to getting everyone insured in concrete instances—and why someone would opt not for group plans—shows just how complicated the situation is, both due to individual circumstances and due to the labyrinthine nature of the “system” that’s involved.

I. Why some of the uninsured remain uninsured: personal observations; and the bugbear of preexisting conditions

First of all, during the years I was uninsured (my family never had insurance after my father died, and indeed before he died, too), I found you could come to live very well without it, when you’re healthy and your health care is usually easy to pay for out of pocket. In fact, you come to see that it’s not wise, when your health allows you to do this, to dump money into paying for insurance, when your monthly (and yearly) expenses if you just paid for the care you needed would be much cheaper on average than paying a premium PLUS paying for the occasional treatment that your insurance may not cover. (Similar stories were recounted in a New York Times article of May 28, starting on the front page.)

But as you get older, you feel that it’s wiser to get insurance. Especially when you’ve had a family history of such problems as heart attacks, as my family has had, you think you need insurance at least for such a catastrophic event as a heart attack and the expensive care such an event prompts.

But then you can encounter problems with getting insurance, because, for instance, when older, you get looked at by an insurance company as a different kind of risk than if you had ingenuously started insurance as some fresh young heart just out of college and starting a new job. Usually the risk you get measured by is whether you have a preexisting condition. And as we have heard in many stories over the decades, the way a preexisting condition can get labeled and cited as a reason to deny coverage can be like the flimsiest of cheap-assed pretexts.

The broader issue is that if the Obamacare plan is looking to have private insurance plans take up a lot of the uninsured in this country—rather than the country adopting a “single payer” system for everyone, which seems politically unfeasible—then, regardless of what rules are set up, we are at the mercy of an industry that is about as apt to follow the straight and narrow as a side-winding snake. And what kind of regulatory and consumer-helping system is there in place to deal with an insurance company throwing you like an angry horse?

Before I turn to my colorful anecdote from 1993-94, let’s look at some authoritative discussion of the health-care reforms that New Jersey was enacting by 1993, to understand some of the background, both to my story and to how people were apt to discuss health-care issues then. A fair amount will seem remarkably reminiscent of some of the issues being addressed today. One key point I will focus on is how much the claim that the individual mandate, as a constitutional issue, solves a more systemic problem tends to be gainsaid by the New Jersey situation of almost 20 years ago.


II. New Jersey’s high-level legal situation on health reform in 1993

1. Preface: In 2012, the Commerce Clause and the between-states issue are beside the point regarding the possibility of insurance shenanigans

One of the things that rather astonishes me, as I go through recent things written about the Supreme Court review of the individual mandate, and gather together the various resources I built up years ago regarding health-insurance reform in New Jersey in the 1990s, is how given to broad-stroke formulations, naïve a priori assessments and “solutions,” and impractical notions the recent debate has been. Granted, the individual-mandate issue is one aspect of a massive bill, and the Supreme Court has to decide a constitutional—meaning, fairly general—question. But as if this even needs to be argued, and as if a peon like me should have to point it out based on his little postage stamp of complex experience on this, if there is one thing the law tries to address that should be dealt with in its complex and practical-implications reality, it is the U.S. health-care system. To do otherwise would be as if, in addressing what makes an elephant an elephant, instead of talking about large size, wrinkled skin, flat feet, big ears, tusks, and a long trunk, people focused only on tusks.

In making the argument I am here, because there are so many aspects to address with the health-care morass, I will try to narrow my argument down to essentials, and hopefully not be as oversimplifying as some people have been in the recent debate.

2. An old farm-subsidy case cited to support health mandate

One of the weirdest things has been to talk about the Commerce Clause in the U.S. Constitution from the standpoint of a 1942 U.S. Supreme Court decision that, according to one op-ed piece, had to do with controlling wheat prices by limiting growing of wheat.

In an op-ed piece in The Star-Ledger (March 26, 2012, p. 4), authors Leslie Meltzer Henry (an assistant professor of law at the Carey School of Law of the University of Maryland) and Maxwell L. Stearns (professor of law at the same law school) say, “The Supreme Court…has relied on the Commerce Clause to strike down state laws that thwart national interests.” Regarding the individual-mandate issue, “Before the ACA [Affordable Care Act, i.e., the Obamacare act], seven states demanded that insurance companies cover high-risk individuals, but without imposing an individual mandate. The results were predictable and frustrating. Absent a meaningful quid pro quo for the additional coverage obligation, insurers pulled out. Leaving the problem of the uninsured to state regulation risked a separating (rather than a pooling) outcome in which high-regulation states drive out insurers but attract high-risk individuals.” The result is, as they say, “Health insurance produces a separation among states…,” and presumably high-risk individuals who are not covered in one state would have to go to another in hopes of getting coverage there. The individual mandate, they seem to imply, would do away with this between-states disparity.

The more I’ve considered this argument—aside from looking at certain empirical issues that are germane to it—the quainter it has looked. It seems like a good example of law professors coming up with a seemingly cogent argument based on a lot of abstractions and not showing a lot of familiarity with the super-complex and irregularity-rich area of the real-world health-care system.

3. New Jersey’s experience with reform in 1993, showing limits within one state

New Jersey voted in health-care reform in late 1992, with the law taking effect at different levels in later 1993 and early 1994. I have an anecdote that gives a very vivid example of what can go wrong when you enact health-care reform that seems to clean up some messes and leaves others untouched (or in some way creates new ones). But let’s look at more general observations, made in no less prestigious a publication than the Journal of the American Medical Association (JAMA) on what happened, or seemed likely to happen, with the 1990s New Jersey reforms.

Joel C. Cantor, Sc.D., writing in the December 22/29, 1993 issue, notes that the New Jersey reform package was passed with a government divided between two parties and political philosophies—a Republican-controlled legislature and a Democratic governor (the opposite of the situation now). The package was also prompted by a court case pursued by unions with respect to the federal ERISA benefits law. “The political reality in New Jersey, as well as in other reform-conscious states, is that finding new sources of revenue for uncompensated care is not a viable option. Moreover, the deep ideological divide between the free market–oriented legislature and regulation-oriented governor meant that the entirety of New Jersey’s health care financing system, not just uncompensated care funding, was subject to negotiation” (p. 2968, 3rd col.). The result “catalyzed the undoing of a successful system that had increased access to hospital care while it controlled costs” (p. 2970, 1st and 2nd col.).

A large part of the phenomena Cantor examines and bases his critique on had to do with the undoing of the “DRG”—diagnosis-related group—way of setting prices, which was tied to, among other things, covering uncompensated care. This is an area I always had to work to understand, and anyway I can leave it aside in my discussion here. One of the four consequences of the 1993 reform Cantor discusses—the change in the DRG system being only one—is germane to the 2012 individual-mandate issue: creating new ways to support insurance for those who otherwise might have had trouble getting it. “The third part of the New Jersey compromise, the introduction of the subsidized insurance program, is a good idea with the potential to increase access to care, but it is at high risk of failure” (p. 2969, 3rd col.). The state’s idea to fund the subsidized program was out of the unemployment insurance surplus. Cantor says, “Financing out of the apparent surplus in the unemployment insurance fund may seem painless now, but its future is uncertain. The use of these funds is being challenged by labor unions and could still be deemed inappropriate…” (p. 2969, 3rd col., p. 2970, 1st col.).

Moreover, looking more broadly, “[t]he history of other states’ tax-subsidized plans similar to the reformed New Jersey approach does not leave much room for optimism” (p. 2970, 1st col.). “A number of experiments of such subsidized insurance plans, most of which target small businesses, were recently completed. Many of these projects were not successful because of the realities of state budgets and a lack of political will to continue public subsidies to narrow constituencies. Others did get off the ground successfully as demonstrations. However, these ‘successes’ are sobering. Few states have expanded their demonstration statewide without strict limits on enrollment” (Ibid.).

“Further, none of the state small-business–oriented demonstrations reached more than 17% of previously uninsured businesses in their first 2 years of operation…. Firms enrolling in insurance through demonstrations have higher-than-average revenue per employee, implying that such subsidies are not well targeted to lower-income workers” (Ibid.). The phenomenon seemed to be what has been seen likely to happen with insurance coverage through thick and thin: if it’s left up to employers to provide it via private (non-governmental) insurance companies, those employers with the money and will to do so will provide it. Not everybody gets coverage.

The net result, after Cantor reviews other evidence as well, is: “In short, New Jersey has replaced essentially universal coverage for inpatient care [as the DRG system helped facilitate] with a plan that, even if it survives politically, is likely to provide coverage to a small proportion of the uninsured population” (p. 2970, 1st col.).

If professors Henry and Stearns thought that the big answer to the big issue of the day was to, in important part, prevent high-risk individuals from going from state to state to get coverage, via the holy legal tool of the Commerce Clause, they didn’t calculate what might happen within states when, instead of a federal single-payer insurance system or expanding Medicare, you rely on a host of private insurance and other plans to cover people. Health insurers have long found ways not to cover certain people. You will simply have situations where, within states, certain little pools won’t cover everyone—whether an employer’s group plan or an association’s group plan—and the uninsured person at hand will have to go somewhere else, and probably run into other brick walls. Such a person will become a hot potato, in effect, passed around from one small group to another that doesn’t want to cover him or her.

4. The element of preventing coverage-denial based on preexisting condition

Another of the easy a priori statements made lately in support of the individual-mandate provision in the ACA is that you cannot have a reform like barring the denial of coverage for preexisting conditions without it. Funny, but this was not the logic—or the factual grounds for a sense of doom for reforms—in New Jersey during the period Cantor discusses. Cantor notes, “The new [1993] insurance rules limit the practice of excluding groups or individuals based on their medical history, restrict the degree to which insurers may charge higher premiums based on medical claims history or related factors, and simplify the health insurance market for individuals and small groups” (p. 2969, 1st and 2nd col.).

Later he comments, “The final piece of the New Jersey compromise is the regulatory reforms, where there is more room for optimism. The new insurance rules are among the strongest enacted by any state. The daunting task faced by individuals and small businesses of finding insurance is now simpler in New Jersey, and private insurers will no longer be permitted to discriminate against those with preexisting conditions” (p. 2970, 1st col.). This, of course, was an assessment based on a generalizing viewpoint. But never does he cite anything reminiscent of Henry and Stearns’ point that “Absent a meaningful quid pro quo for the additional coverage obligation, insurers pulled out.” The point with respect to the ACA is that, at no point did Cantor think it was odd, or unworkable, to institute reforms regarding preexisting conditions without having “universal coverage.”

5. Jersey’s subsidized individual plans ran into trouble

When we look at my anecdote from 1993-94, we will see how ridiculous things could get even as the state erected the details of its new set of reforms. Incidentally, the New Jersey program of outlining a state-defined set of options of individual-insurance plans, with financial subsidies arising out of the unemployment fund, ran aground in about 1997 as the funds for this program dried up. I have sketchy memories of how and when this happened when I was looking into such coverage for myself, but I do know that the method by which you applied, the options offered, and the promise for me in particular as related to my own financial realities never gave me a whole lot of grounds for optimism.


Takeaway: In an elaborate attempt at health-care financing reform in 1993-94, first, no one of relevant authority in New Jersey saw it as necessary to tie reforms regarding coverage of persons with preexisting conditions to some measure requiring “universal coverage.” Second, while the reforms brought a loss of funding for uncompensated care in the hospital area, attempts to increase coverage in other areas, involving (among other things) a novel form of subsidizing individual health-care plans, appear to have had weak or mixed results.

Practical question we are left with: In trying to get individual insurance and being denied coverage due to a preexisting condition, what remedies, tools, and procedures are there in the health-care system to help us, especially when some overarching “reform” program is being enacted?

Part 2 to come.

Wednesday, March 28, 2012

Anecdotal Evidenz: A new feature on health-care system issues


[A few edits were done 3/31/12, to paragraphs preceded by **. The original versions weren't unintelligible before being corrected, but they are clearer now.]

I had written a fairly heartfelt, relatively succinct “position statement” on the individual-mandate provision of the Obamacare law, challenges to which were recently argued before the U.S. Supreme Court. I held off posting this statement on this blog because, in part, seemingly nearly everybody was making such statements, and I probably had little more original to say than many others. In short, I am against the individual mandate, but I also would tend to be practical about it. If it is upheld, and I (like many others in my position) would have to pay a penalty on my federal tax return for failure to buy my own insurance, I would have contingency plans for how to deal with that. Among them would be finding ways to have business expenses and such to report on my 1040, to offset the penalty to the extent possible.

But one way I can contribute to the debate, if a marginal one, would also be fun: to have a little periodic feature on this blog (generally similar to “Movie break”) called “Anecdotal Evidenz,” the second word spelled Germanic-ly just for fun (and to save on character space). This would be to relate little stories—many from my own experience—to support the following simple notion: If every Shmoe who is not covered by some kind of group insurance in this country should step up to the plate and pay out of his own pocket for insurance, in order to help the financial integrity of an insurance system that aims to cover everybody, do we really have a rational, integrity-redolent health-care system in this country that would inspire (and give a kind of moral/emotional reward for) that kind of socially responsible effort? If I break my ass to pay for health insurance, am I assured that I am helping a large system that is, generally speaking, sensible in its every part, including toward me (and you)? Or are there many anecdotes to show us that the “system” has a lot more problems with it—more numerous and often of huge consequence—than just several million citizens who, putatively ignominiously, decline to get insured?

Let’s look at those anecdotes and see if we can answer the last question yes.


Anecdote 1 (1984): Health insurance at my first post-college job

This is a story I’ve told numerous times over the years, from a sort of forgiving, retrospective bemusement. But it’s funny how the story lends itself to more acerbic analysis as I pose it in this blog series, with an eye to casting skepticism on the apparently not-rare viewpoint that says just a few big tweaks, like getting everybody insurance even if it means compelling them to pay for it, will make the American health-care system “closer to perfect” rather than the jury-rigged, hypocritical, at times laughable mishmash it really is.

When I started a “permanent” version of my job as a weekend building manager at the Marvin Center, the big student union at my college that was run by a paid staff, it was not much different from the assistant-manager job I’d had as a student for about two years. Short sum: I had a student job at the Marvin Center—a paid job, within the university (not work-study)—from October 1980 to May 1981, September 1981 to May 1982, and from September 1982 through about early May 1984. The portions of this to December 1981 were in the Marvin Center’s game room, a limited facility on one floor; the rest (for about two years) was as an assistant manager (often there were two of us per night we worked, along with a non-student staffer) of the whole building, which spanned about six floors. When I applied for and was made the permanent assistant building manager for Friday, Saturday, and Sunday nights, this was a slightly glorified version of what I’d done as a student—and developed skills and responsibilities in—since January 1982.

The permanent job was 30 hours a week—admittedly part-time, but I think this suited me fine, not only because it followed a busy several years as a student with a double major and with paid work almost all through the school year, but (in terms of longer-term rationale) I was considering taking post-graduate courses, and the Marvin Center job wasn’t meant to be a career thing anyway…and (in terms of ad hoc developments), I would take a second job, outside the university, with the Tennessee Valley Authority’s Washington, D.C., office for about two months in 1985.

George Washington University, my college, had (as far as I recall) a very good health insurance plan. I forget now whether it was an HMO; I don’t think it was a preferred-provider plan; but it was oriented primarily to GW as a large employer. It might have included fee-for-service provisions along with HMO qualities; I can’t remember. And I think it was offered by a large health-insurance company like Blue Cross-Blue Shield. In any event, it was generally a good plan, and it did cover treatments from facilities that were outside what I think the plan suggested you use.

**But if you were part-time, whatever number of hours you worked, you were classified as “20 hours a week,” and, per a logic that apparently held that 20 hours a week, being half a 40-hour full-timer’s load, meant half the work, you paid for half your insurance. So even though my regular schedule was 30 hours a week—and the overall university even had some employees who were classified as full-timers who worked 35 hours a week—I had to pay for half my insurance as if I were a 20-hour worker. So, for a few months early in my tenure as a permanent staffer, that’s what I did.

This, by the way, was the first regular health insurance—actually, the first of any kind—I ever had. I never had health insurance when growing up, and never had it during my student years in college. The same was true for my mother and sister. You could do that in those days. (And whenever I had health expenses, I paid out of pocket, basically.)

In those days, I was in my twenties, and generally healthy. One regular health expense I did have was for megavitamins, which I did not consider elective—and which generally still had credibility in the early 1980s, but which after the death of one of this therapy’s main proponents, Carl C. Pfeiffer, M.D., Ph.D., in 1988, would fall into increasing ill repute, to where it is regarded as quackery today (rightly so, for reasons I won’t detail here, but which I have written about elsewhere). I had been getting megavitamins, usually as a yearly matter (you bought everything you would take for a year in one annual visit), since 1979. When I went to my regular appointment with the megavitamin facility I saw in summer 1984, I had a bill for $200+, which was fairly typical. I submitted the bill to the GW health insurance program.

It covered all of $4 of it. About 2 percent.

I thought, if my main health expense that I could foresee as a regular matter was hardly going to be covered by GW’s insurance plan, why should I be paying for half of my premium a month? So I dropped my coverage soon after the megavitamins billing.

**Meanwhile—here is more of a kicker—my sister, who graduated from college in 1985—in Washington, D.C., but from American University—was working (as a sort of low-level producing functionary) for a small media firm run by a woman with a strong personality (it produced syndicated radio shows). And since my sister was full-time there, she got a full health insurance benefit, paid for 100 percent by the employer. And what plan was she offered? The GW health insurance plan.

See, apparently GW’s employer-dedicated plan not only served the massive GW workforce, but also was shopped (presumably by some sales dorks within the administration of the plan) to outside, smaller companies, whose money (when they signed up) paid for premiums that would be only too welcome to the administrators of the GW plan. (And if I’m not mistaken, my sister’s individual plan had more features than mine had, but I could be wrong on this.)

Of course, as far as my own health-insurance situation was concerned, this would be the first of many, many experiences I would have of an employer doling out a work deal that seemed quite good for you in some respects—the nature of the job, the hours, etc.—while the health insurance benefit was more like, as the old story has it, the feather held out by the Indian chief, who ostensibly is seeing, with a show of wisdom, if his young disciple can snatch it from his hand, while the chief is always making sure the disciple doesn’t get it.

This was hardly the worst story I would experience (or hear about) of an employer’s health-insurance offer being less than it should be, even by standards of simple fairness related to your situation. The worst horror story in my life involved a medical-media publisher I worked at in 1993-94.

But this is one good example of how, with all the talk about pooling risk (then or now), the high-minded principles that are “espoused” and the realities seen on the ground have diverged quite a bit at least as far back as about 30 years ago. And this was with a massive employer like GW. GW’s health plan could suck in premiums from companies outside itself to beef up its revenues, but its own employees could not always rely on fair practices for whether they qualified for insurance paid for entirely by the employer.

Sunday, March 25, 2012

Movie break: An artful film on police misconduct and racial attitudes: Touch of Evil (1958)

[This is not meant to comment, indirectly or otherwise, on any current event being covered widely in the media, but is about a longtime American problem, to the extent it relates to anti-Black racism in this country. Commentators for the DVD of this movie certainly so relate it.]

Among films considered the greatest in history, for those who are students of film or just avid film buffs, Citizen Kane (1941) still ranks among the top, and rightly so. This film, loosely based on the life of William Randolph Hearst, was the first and best directorial effort by Orson Welles, a prodigy with earlier noted work on stage and in radio; and even if some viewers today might consider this film a little too old-fashioned for their taste, there’s no denying its historical importance in terms of showing just how much film can do to tell a story: not just unfold a verbal representation, but use imagery in all sorts of ways to help further the story in a way that words alone can’t. Citizen Kane is a compendium of techniques, visual and otherwise—such as deep focus, camera angle, symbolic or simply innovative use of imagery, and visual or sound edits—that would become commonly used in subsequent years, though not always as densely arrayed as here.

Welles is also regarded as a pioneer among artistic film directors. While other directors before him—such as Victor Fleming—put together films that have been long and widely famous and loved for aspects other than the artfulness of their directing (such as The Wizard of Oz and Gone With the Wind), Welles showed how much a director could stretch the bounds of what made a film art, from the director’s own professional standpoint. And of course, by the 1970s, the director became not only the central focus (or dramatically more central than before) of what gave a film character and quality—in the public’s eye as well as in the critic’s—but he or she became the basis for a sort of branding of films.

As Welles has been the subject of scholarship and trade-book studies, all his films—and even unfinished works—have received attention. And when you see some of his other films, despite their flaws there’s always something interesting in them—such as The Lady from Shanghai or The Stranger, both from the 1940s—that echo the grand promise of Citizen Kane. If one were to ask which of his films ranked second and third greatest, probably Touch of Evil (1958) would be second-greatest; and though I haven’t seen it, Chimes at Midnight (1966), which for licensing reasons I believe is still hard to get in the U.S., might be third (it was Welles’ personal favorite, and is esteemed by Welles followers, too).

Touch of Evil is one that I’ve watched repeatedly and with relish—it is a sort of “bastard Citizen Kane” in that it is a genre piece that ostensibly was meant, when Universal contracted for it, not to be anything special, but arguably was Welles’ second and last film, brought to some kind of completion, that featured as much directorial playfulness as did Citizen Kane.

But as a sort of thriller with certain commonalities with the western genre, it uses its visual virtuosity to flesh out a story about corruption, ambiguity, social tension—all the sorts of shadowy facts of grassroots American life that were a focus of attention by the 1950s that the rarefied world of Citizen Kane, inevitably, kept above (it had its own types of corruptions, germane to the level of society it covered). So if Citizen Kane was a study of an American “great man,” Touch of Evil was an amplification of how a talented man—an effective local detective—could be corrupt to the point of being a sort of negative center of gravity in an international-border town, enough to make a sudden emergency readily lend itself to nightmarish shenanigans, snaring a wide range of people.

If you are able to buy, rent, or borrow the 50th-anniversary DVD of Touch of Evil, it has all three versions of the movie: the original, deficient release version; the “preview version” that was discovered in about 1975, which includes some important sequences that flesh out and clarify the plot; and the 1998 “re-edit,” which brings the film most in line with a 58-page memo that Welles wrote to Universal after being barred from post-production, while the film was still being finished, in order to put the film as close as possible to how he had aimed to make it (in some cases, just to make the story more coherent), though he knew it wouldn’t be as close to perfect as he could have gotten it had he been in editorial control all along.

The different versions offer optional commentaries from various parties to the film’s making or to Welles scholarship: Rick Schmidlin, the producer in charge of the 1998 re-edit, and Charlton Heston and Janet Leigh, who both starred in the film; Jonathan Rosenbaum and James Naremore, who both wrote books on Welles and comment very edifyingly on the preview version (Naremore is especially heartfelt); and F.X. Feeney, another Welles scholar, who comments on the release version. There are also mini-documentaries including comments by various persons, among them Walter Murch, the famous film editor (involved in many of Francis Ford Coppola’s productions, and in such more recent films as Cold Mountain), who technically edited the re-edit of Touch of Evil.

For those truly interested in Welles and this film in particular, and who have the time, this is perhaps the richest DVD out there on any Welles work, in explaining the history and virtues of this film. Indeed, the story of this film is complex—showing how Welles was ahead of his time as a film artist, within a producing studio that had little use for such a director—and the college-level scholarship on this film implies it has as much substance as many a work of canonized American literature that students might read.

Thursday, March 22, 2012

Movie break: Served. Witnessed. Have a Nice Day: The Coen brothers’ edifying portrayal of practicing lawyers in their movies, Part 2 of 2


[I erred; it turns out there are books related to the Coens—see this Amazon page (this does not constitute an endorsement of the books). However, they seem generally like “fanboy” books—certainly the “Dude” one does—and not scholarly or otherwise judicious studies. Anyway, I haven’t read them, so perhaps my Coens analysis is a little poorer for this.]

[Also, this entry touches on issues of tolerance that are, in current events, inflamed amid the Dharun Ravi case, an anti-Semitic attack in France, and a racially charged shooting case in Florida. Perhaps it would be helpful to read the last subsection here (“What lesson?”) before the next-to-last subsection ("The nightmarish senior partner...").]

[This entry has been edited 3/23/12.]

Intolerable Cruelty (2003)—the epitome of the Coens’ skepticism of lawyers

Having to represent yourself in an important legal matter—as I’ve done a number of times—is typically a lonely affair, and is like finding a snake in your sock drawer: it’s something of a horror to have to deal with it, and you don’t relish doing it again, but the more often you do it, the better you get at it, and the more you somehow welcome the experience.

But you need some therapeutic help amid such a process, no matter how developed your confidence. So if ever you wanted to see a movie that, for sheer laughs in a strongly satiric vein, would take your worried mind off a pending legal matter—because it gives some helpful laughs about the legal profession—Intolerable Cruelty is one of the best choices.

According to a DVD-extra interview of the Coens, this film (IC for short) began as a writing project they did for Universal, starting apparently in about 1994. Whatever complicated route it took from there, it ended up with screenwriting credits for the Coens and two other people, and Brian Grazer (producer of A Beautiful Mind and other notable films) is one of the producers. The end product is a pitched satire of the divorce-lawyer world, yet it comments—in the Coens’ way of adapting, and adapting to, certain standard genres—on “screwball comedy,” with (as Ethan notes) the softheaded male and the hardheaded female. Such films might have starred Cary Grant, on whom George Clooney’s wardrobe (and other features) is modeled in IC. But here, instead of the relatively mild humor of screwball comedy, you get laughs as from dark satire.

Meanwhile, the charisma (and tooled acting) of the two stars (Catherine Zeta-Jones plays the woman set against Clooney’s character) and a certain burnished, Technicolor look to the film make it an odd mix of sleek entertainment seemingly aimed at the mainstream and a sly, dark commentary. And it is the only Coens film that focuses mainly on the legal world, whereas the legal world makes up only minor (if usually important) parts of many of their other films.

The premises of IC, and even many of its character names, have a certain “formulaically satirical” quality to them. Clooney plays Miles Massey, a next-to-top partner in the firm Massey Myerson Sloan and Guralnik, LLP, which seems to specialize in matrimonial law. After an introductory scene in which a putatively sleazy producer of soap operas (“Donovan Donaly”) is cuckolded by his wife, she turns up in Miles’ office for an initial consultation, which is a tour de force of a scene depicting a disingenuous attorney willfully misinterpreting and redefining what he half-hears of her story—even while she half-moves to correct him—in order to shape it to a case in which he feels he can win uncompromisingly.


Storyline shows serial golddigger “matched” (in two ways) by ruthless attorney

Then the main plotline involves Miles’ representing Rex Rexroth, a mini-mall developer whose wife Marilyn, played by Zeta-Jones, is suing him for divorce; the outcome of the two men’s “initial consult” is a summing up to Rex Rexroth by Miles: “So you propose that, in spite of demonstrable infidelity on your part, your unoffending wife should be thrown out on her ear?” Rex smiles like a kid: “Is that possible?” Miles turns briefly to think. He responds somewhat ironically, “It’s a challenge.”

He then uses in his own way the same crass private investigator that Zeta-Jones’ character has used to get videotape on Rex shacking up with a floozy at a motel: Gus Petch, a tough, competent, self-promoting sort who follows his prey into their trysting place with a big video camera recording them and chants with gratification, “I’m gonna nail your ass! I’m gonna nail your ass!” (This phrase becomes playfully echoed by different people in different contexts in the film.)

At Miles’ special request, Gus Petch (played by comic and actor Cedric the Entertainer) will break into Marilyn and Rex’s house to photograph her address book; it is this maneuver that has Miles’ assistant Wrigley [sp?] ask him, “Couldn’t you get disbarred for that?”—to which Miles has a smooth, self-serving answer. It is on the basis of Gus’s finding info on the amusing concierge of a swanky European inn that Miles is able to win in a big way for Rex in the divorce trial, with no money to Marilyn, in a nicely wrought scene in court: the concierge turns out to present evidence on Marilyn’s having set up Rex to be married to her until such time as she could divorce him for adultery and get some of his estate. And as we find, Marilyn’s poolside girlfriends (who greet each other with traded air-kisses) mostly seem like serial marriers-and-divorcers out only for their husbands’ money.

The second half of the film, after Miles wins for Rex, shows Marilyn trying to get even with Miles—who, as it happens, despite his apparent cynicism about marriage, falls in love with her—by making Miles (with a pang of jealous yearning) want to marry her after she pretends to marry an oil millionaire, who is actually a soap-opera actor to whom Marilyn is referred by the now-destitute Donovan Donaly. She marries Miles (in a fun scene at the apparently real-life Wee Kirk o’ the Heather, a Scottish-themed wedding chapel in Las Vegas), immediately after which she gets the wheels turning on divorcing him to get his money—and this after importantly scrapping Miles’ invention, a form of prenuptial agreement that is supposedly impossible to “penetrate” once it is in force. She tears up the “prenup” as if to demonstrate her love and trust in him, but ends up using this voiding of the prenup as a means to have his estate be exposed to her courtroom attempt to get at least half of it.

As I write this, I am reminded of how densely developed a satirical script this is: all sorts of comical elements interwoven, in a story that seems to describe such selfish, cynical people that some viewers today might find it so dark as to be wildly unrealistic. And yet, as I’ve said, so much more seems subject to satirical treatment in this country post-2008 that IC seems about as “wise about where we are now” as an old Mad magazine.


The nightmarish senior partner brings home the most trenchant remarks about cynical law

However much the Coens themselves devised in this film the barbed criticism of attorneys as disingenuous strivers with virtually no conception of love, this film certainly goes about as far as you can go to make these points without tipping over into completely implausible situations and characterizations (and maybe some would find it this way anyway). But their most pointed commentary on the nature of law as practiced in this country—or the worst extremes to which it can be taken—is perhaps in Miles’ visits to the “senior partner’s” office, that of Herb Myerson (actually, one of the “visits” is just a nightmare image of Miles’). There are three such scenes in the film, the first two being pat theme-setters of a sort; the third actually is an important locus of a plot development.

Herb Myerson, well into his eighties, is a cadaverous old coot hooked up to enough tubes and beeping machines that he seems as if he’s in an ICU, in a shadowy office that almost seems like an especially murky Mafioso’s lair. (It is interesting to compare this fantasy center of power—a sort of “Wizard of Oz behind the curtain”—with the odd redoubt of Marshak, the senior rabbi of the community temple in A Serious Man (2009), who seems an occult, benign, bearded elder hemmed in behind a mysterious museum-like array of Judaica, Hebraica, and apparent medical-lab displays and other portentous odds and ends. Marshak’s “emeritus official’s” main duty, to greet the newly bar-mitzvah’d boys, seems per the movie’s whimsy to entail little more than clearing his throat noisily and, in the case of the movie’s boy-protagonist, returning his confiscated transistor radio and showing he’s learned the names of the members of a rock group the boy has shown interest in.)

In the two theme-setter scenes in IC, Myerson congratulates Miles for his value to the firm, with Myerson listing the stats on Miles’ accomplishments, almost as if the firm were a sleazy, boiler-room affair interested only in cold end results: X many motions for summary judgment sought, Y granted; and so on…and umpty-ump lunches charged. Myerson has a New York area Ashkenazi-Jewish accent (e.g., “firm” is “foim”) that would almost seem an anti-Semitic slur if it weren’t for the Coens being the sly perpetrators of this; he seems like the kind of cynical elderly Jew whom some would scorn as “the type that gives his ethnic group a bad name,” or “that kike who gypped me in X circumstance.” It’s as if the Coens are saying the “heart of darkness” of such a firm is a lot of people’s stereotype of a ghoulish old Jew who is so greedy, he is salivating over the balance sheet even when half dead.

In the third Myerson scene, the old man now seems less enthralled to good news of balance-sheet numbers and is keen on confronting a chastened Miles, who has been thrown for a loop by Marilyn’s proceeding to divorce him, with Miles confessing to the fact that for the first time in his life, “I don’t know what to do; I’m a sitting duck!”

Myerson responds furiously, scorning “all your goddam love, love, love,” and arguing, “This firm deals in power! This firm deals in perception! This firm cannot prosper nor long endure [note the similarity to language in the Gettysburg Address] if it’s perceived to be dancing to the music of a hurdy-gurdy! [his pronunciation: “hoidy-goidy”].” Then, in what may be the main satirical point the Coens make about how bad a law firm can get, he says (some of this may be paraphrased), “I’m going to tell you something about the goddam law! We honor the law! We serve the law! And sometimes, counselor, we obey the law. But counselor, this is not one of those times.”

And we cut to a scene with another wacky character, a stocky hit man named Wheezy Joe, who confers with Miles and Wrigley on a plan to kill Marilyn, while he uses what seems a rescue asthma inhaler with some frequency. The denouement of the movie involves Miles and Wrigley’s trying to stop Wheezy Joe from assassinating Marilyn once they find she has inherited Rex Rexroth’s estate—he has suddenly died, and had never revised his will from when he’d still been married to her—and later Wheezy Joe dies in a bizarre mishap that is a typical violent episode in Coens films. Miles and Marilyn eventually have a reconciliation…and as screwball comedy has it, there is a sort of neat happy ending, though in this case we obviously have reservations about the morals of the individuals comprising the happy couple.


What lesson?

If people find the Coens too glibly cynical at times, perhaps the best way to look at something like “the nightmare Jew” at the heart of a law firm is to compare these capable filmmakers not to the area of American literature I invoked in the first part of this blog entry (the naturalism and satire of 1870-1915), but to another: the work of William Faulkner, whose demanding novels contained such unhappy elements as a mentally retarded person being the “reliable narrator” for part of a story (The Sound and the Fury), violence and dislocation shaping uneducated people’s lives (As I Lay Dying), and plenty else that didn’t leave things simple for our unruffled consideration. In The Sound and the Fury, Jason Compson, who seems the sanest character to whom a long chapter’s focus is given, seems racist in a way that might make us laugh despite ourselves: what does Faulkner want us to believe about this?

Where was the moral center of some of his works? How did we know which character to believe in?

We know from, say, his having the servant Dilsey be the center of decency and everyday sobriety in The Sound and the Fury that he required us to seek out, amid the damage, decline, and distortion of American life, where the true keepers of the flame of “traditional values” were. In the introduction to the section on Faulkner in the 1979 edition of The Norton Anthology of American Literature (the volume covering from about 1865 to the modern day), it is noted that Faulkner can present a stereotype in a character, but his work is such that you are invited to see the reality around or beyond it. [See end note.] His type of work of art may not have made this easy, but the dislocation, the sensational portraits of all kinds of violence, helped make discovery of the real “heart of the work” all the more precious and credible.

More generally, art can help us face, and resolve, stereotypic thinking in two ways: (1) in providing a space where all boundaries are dissolved, and we share (for a relaxed time) in all “other types”—such as with rock ’n’ roll (either as fans or practitioners), where everyone can be race-blending and/or gender-bending and/or a “whatsit?”—like a Little Richard, a David Bowie, a Mick Jagger, or a Lady Gaga. The other way is (2) to somehow facetiously embrace and draw implications out of a stereotype that is so bluntly ludicrous that we start to see how ridiculous stereotypic thinking is in general—such as in Sacha Baron Cohen’s Borat (2006).

But the Coens don’t make this sort of aim simple; they’re, you might say, college-level filmmakers. In Intolerable Cruelty, Miles gives a speech to a convention of matrimonial lawyers, after he has married Marilyn, that is the most sincere-sounding “soliloquy” given in the movie—though obviously we are meant to see it as quite ironic to be delivered to such attorneys. Later, Miles returns more long-term to something of his more reprehensible side. Does the movie’s real “message” lie in this speech? Or in the seemingly slur-like characterization of Herb Myerson?

That’s the sort of challenge of Coens movies. Where is the restoration of sanity? Where is the moral center? Is a given fleeting instance of either of these enough for the movie? Maybe that’s up to all of us individually to decide.

###

End note


In The Norton Anthology of American Literature, vol. 2 (New York: W.W. Norton & Company, 1979):

“Faulkner’s literary form is distinctly appropriate to the sense of desperate urgency with which he confronts the world he envisions. The contortions into which he twists his materials…and the sensationalism of his themes and formal effects…define Faulkner’s world. It is a world in which ‘despair’ and ‘doom’ are recurring motifs because social and moral orders prove to be founded on racial exploitation and violence; …[and] social and moral traditions are threatened with enervation and perversion, and human destiny is faced with the alternatives of annihilation or apocalypse.” (p. 1760)

“His fiction, not his public pronouncements, remained the most sensitive register of his anguished recognition of the power and human capacities of the black people. There he had ‘explored,’ in the words of the black novelist Ralph Ellison, ‘perhaps more successfully than anyone else, white or black, certain forms of Negro humanity.’ In portraying American Negroes he ‘had been more willing perhaps than any other artist to start with the stereotype, accept it as true, and then seek out the human truth which it hides.’” (p. 759)

Sunday, March 18, 2012

What kind of “culture-changer”? The Dharun Ravi verdict

[This entry may be edited for errors, and/or added to, within coming days.]

There is no doubt that the criminal trial of Dharun Ravi, for alleged bullying of fellow freshman student and dorm roommate Tyler Clementi, deals with serious issues in a complex nexus of incidents and relevant concepts. Clementi, after all, committed suicide in late September 2010, within a few days of discovering that his roommate Ravi had been exposing some video representation of intimate encounters Clementi had with a transient mate, by use of a webcam, with associated Twitter posts alerting other fellow students to what was going on. This sums up some of the facts of the case, as presented in newspapers as they were reporting on the trial (and as it’s been noted, the facts presented in the trial were largely or entirely not in dispute).
I’ve followed the trial coverage pretty closely. Before the jury rendered its verdict, I felt that a verdict of guilty for felonious “bias intimidation” was a little too strong for this case.
After closely looking at the results, and respecting the intricate set of findings the jury made on numerous counts and in accordance with what has been repeatedly called a murky New Jersey statute [1], the one used regarding bias intimidation, I would like to present a criticism, by which I don’t mean to sway the judicial proceedings that are apt to go on (Ravi’s attorney plans an appeal, for one thing).

Professional assessments of the trial

A Rutgers law professor, Louis Raveson, said this “was an incredibly important case, not just for New Jersey but for the country” (“Law experts say verdict breaks new ground,” The Star-Ledger [March 17, 2012], p. 8). An analysis a day later says “[t]he verdict in the Tyler Clementi case could be a culture-changer, in more ways than one” (“Ravi verdict becomes ‘a cautionary tale,’” The Star-Ledger [March 18, 2012], p. 10).
A little bit more caution about this case was presented prior to the verdict by one notable analyst who hailed the verdict a day or two later. On about March 14, “The bias intimidation statute is one of New Jersey’s most important and effective criminal and civil rights laws,” said Steven Goldstein, chairman of an advocacy group, Garden State Equality. “The question isn’t whether it’s worthwhile. The question is whether it applies to this particular case” (quotes from Mark Di Ionno’s column, “N.J. bias law on trial alongside Ravi,” The Star-Ledger [March 15, 2012], p. 12). “Goldstein said he and his organization were reserving comment on that question,” the column added.
On the day of the verdict, Goldstein effused, “This verdict, combined with New Jersey’s new anti-bullying law, sends a powerful signal across the state and, frankly, across the country that the days of a kids-will-be-kids defense to brutal bullying are now over, and thank God for that” (quote from “Jury: It was hate[.] Ravi convicted of spying, bias against gay teen[.] 10-year term possible after rejected plea,” The Star-Ledger [March 17, 2012], p. 6).

Wrong concept was central?

People have hailed the verdict as a groundbreaker. But I think the problem is that the wrong concept was grappled with in determining how guilty Ravi should be considered, and for what.
The verdict’s biggest bite was in assessing guilt in line with the bias intimidation statute, which, it’s been noted, increases the severity of the crime by a degree, and adds severity of penalty (see Di Ionno’s column, p. 18, fifth column, sixth and seventh full paragraphs; or if accessed online [and if same as print version], 15th and 16th paragraphs). The statute also doesn’t articulate its meaning and intent very well; the judge in this case said, “I’ve read the statute and read the statute more times than I can count. […] The statute, to me, is muddled. If I had written it, I would have written it differently. […]” (quoted in Di Ionno, p. 15). The different aspects of the statute in line with which the jury voted can be seen in a table in The Star-Ledger (March 17, 2012), p. 6.
I think the fundamental problem is that a lot of work—by the prosecution, by the defense to the extent it tried to counter the prosecution, and by commentators—has been done to see how, whether, and how well the bias intimidation statute addresses a case of invasion of privacy that led to a young student’s suicide. But to me the central concept that needed addressing was to what measurable extent the Internet exposure was a factor for which Ravi (and anyone who might do something similar) could be held accountable.
I’ll admit this proposed idea is almost tantamount to making a finding of “bias intimidation” with respect to just the victim’s apparent state of mind (see criticisms in “Law experts say verdict breaks new ground,” The Star-Ledger [March 17, 2012], p. 8, especially that of defense attorney Lawrence Lustberg—e.g., “[I]t is unprecedented for a conviction to be based on the state of mind of the victim”).
But my position is importantly different.
When this case first came to light and started being handled by a prosecutorial team, I thought some new ground could be broken with regard to the specific peculiarities of Internet exposure. And it would seem that the aim was carried through, to judge from many people’s hailing this case as a way “bullying” has been dealt with.
But the problem, to me, is that one key aspect of what has made this “bullying”—the Internet exposure—has been taken for granted: this concept, and how this instance measures up to it, hasn’t been examined on its merits nearly as much as there has been talk about how the bias intimidation statute applies. In fact, from the Star-Ledger’s representation of the statute (i.e., what I’ve seen, which may not be all of it), there seems to be no explicit mention in the statute of Internet use at all.
Why is this difference important?

I’m not insensitive to the issue of college-freshman bullying

Let me pause and make an important point from my personal experience. I don’t think Ravi is a cad to be excoriated to the heavens for what he did; oh yes, he was rude to say the least, and reckless, and biased to some extent, in regarding Clementi, and the latter’s having a date over to their room, as he did.
I understand having a college roommate who can’t stand you in some profound way. And I guess you could say I was bullied in some way by a certain roommate, though we didn’t use such terms about such a thing then.
In my freshman year, at George Washington University, I was among largely Jewish students for the first time in my life. [Background note: One future book I have long had gestating has the working title, The Jewish Experience in America from a Protestant Perspective, and aims to be based on my personal experience, reading, and research, and to take a balanced look at a complex phenomenon. It will not be uncritically philo-Semitic. And some of my recent blog comments are rather like spring robins hailing the more nuanced, balanced, and good-humored content of this manuscript, whenever it gets more finished.]
This was a cold shock. The first week of freshman year, easily the most scaldingly memorable for many students, had its share of defining culture clash. I still talk about one of my roommates today, more than 30 years later: Alan L., who was a full year younger than me and was so hyperopinionated, grotesquely self-centered, and rude—e.g., criticizing you for things of mere taste or personal habits that he didn’t share, such as drinking milk (not a point of religious precepts for him, just a matter of taste), or making fun of you for the tiniest passing peccadilloes—that I’ve thought of him this way: If one person was to be the basis for an anti-Semitic viewpoint, he would have been it for me, without apology. But of course you don’t just base your views of an ethnic group on one person.
With him, what I faced wasn’t an “anti-gay” attitude as Clementi did; it was “anti-goy.” I had never experienced this before. I would realize I was spoiled in coming from the exurban area I did, having graduated from Vernon Township High School, where although I was a bit alienated from fellow students for personal reasons I would take some responsibility for, at least I was accorded some general respect. I was ranked fourth in my class; I knew everyone in my class of about 180 by name or at least by face.
At GW, I was treated like I had to earn my wings—not only as a student (to some extent), but even “as a person”—all over again.
Regarding me and Alan L., there had to have been talk (such as between me and an RA, a “resident assistant”) about possibly changing rooms, but it was never done. Such was not typically accommodated in those days. I remember an RA talking to me in a friendly way—he was a Jewish male named Rich Miller from upstate New York, and very nice—and one of the ways he got on the same page with me about Alan L. was remarking with a sort of mild awe on Alan’s heavy Long Island accent (Alan had a “South Shore Long Island tough-stuff” accent, almost a caricature, that I will never forget—it wasn’t what you would expect among students at a private university).
A lot more can be said about this. Alan and I not only spent all of freshman year in the same room (with another roomie, Randy K., who became a doctor), but Alan and I spent half of sophomore year in another room—long story why.
And I think I did grow for the better in some ways from this cold experience—but my resentment 30+ years later shows that not all was positive in what I took away from it.
So, in short, I understand where Clementi was coming from in finding himself in a scalding situation with Ravi. And, sure, Clementi could have seen Ravi pretty much as I saw Alan L. 30+ years ago.

Today’s game-changer: the Internet

But what we didn’t have in 1980 was the Internet, and its potential public exposure.
And Clementi committed suicide, as I did not.
What may well have made the difference?
Not so much Ravi’s “being a bully” as addressed (however deeply) in the state statute, but Ravi’s having made a video’d encounter of Clementi’s accessible via some Internet alert. Clementi could have thought fearfully, “Did hundreds or thousands see this?”
That’s what could have shocked and demoralized him so much that he would jump off the George Washington Bridge.
After all, if mere bullying was the issue, Clementi and Ravi had only a very few weeks together. Maybe longer time was needed to determine whether there was a real bullying relationship. I had three semesters of Alan L.; and of course one semester was enough to know where I stood with him (not at all comfortably).
In short, isn’t the Internet aspect the key factor that needs to be grappled with in this type of case? Oughtn’t the legal professionals who were involved have sought to determine how much Internet exposure really went beyond the acceptable limit?

Parsing “cyberbullying”; and generational differences

Indeed, when “cyberbullying” is talked about, the bullying aspect attracts sympathy and attention. It used to be, and to some extent still is, that bullying meant being threatened with being beaten up, or having your possessions harmed, or the like. It meant an appreciable threat of physical harm.
People have seemed to think that demeaning someone via Internet exposure made it an obvious candidate for a reading of “bullying.”
In given cases, this could well be, but is it always?
Part of the problem is that “kids” who are so well versed in electronic-media use—communicating through Facebook, Twitter, instant messaging, texting—have seen no problem saying all kinds of things through these avenues. This is the equivalent of the yack-yack-yacking on the phone that was the primary way kids gossiped pre-Internet. It is, further, quite plausible to assume that kids will often give a catty edge to their comments when able to do it online, especially when this mode has the potential to broadcast a message so widely. They don’t think of the ramifications of someone taking a catty remark well beyond what was intended.
This is not simply to criticize them. It is to show a generational difference.
Fifty-five years or so ago, when rock ’n’ roll was relatively new, parents dismissed it as “jungle music” and as apt to pervert the young. Today, you can’t have a newscast or newsmagazine without a story related to some star who has made his or her career in this area. Styles of communicating via electronic means could experience a similar growth in acceptance. What may seem intolerably catty or nasty in tweets today may be unremarkable 20 years from now.
But of course, laws reflect not just one generation’s preferences over, or in spite of, another’s; they are a way for society to outline acceptable conduct that respects the sensibilities of the widest number of people (this is the ideal, which is often not attained in actual legislation). So, some laws may protect the rights of some—especially younger people—in ways that seem to condone what older people might not approve of. For instance, to take a hot-button topic, a law protecting abortion rights served the preferences of groups that in about 1973 (the time of Roe v. Wade) might have been considered typically those of “the young generation,” yet it thereby sanctioned behavior that some among the then-living older generations condemned.
When it comes to cyberbullying, the aim should not simply be to slap young people in the face for some mere general idea of what older people find appalling—such as bitchy communicating exposed for the world to see. It should be to determine what type of communications, with regard to what context, and with what roughly ascertainable exposure, can be tolerated, and what can’t. And of course, the limits outlined in one decade may be changed in a later one.

Internet ambiguities more important than a deficient statute

In short, this case should have focused more on the ambiguities of Internet communications, and less on a statute that seems too confused even to address some more concrete, “traditional” examples of bullying. How many people, as it could reasonably be estimated, were exposed to Ravi’s tweets about Clementi? How much could this have been expected to matter?
Just having printouts of a plethora of communications doesn’t simply and loudly mean “these kids were going nuts with an unacceptably high-handed and at times biased approach to Tyler Clementi.” Sure, having hard-copy evidence, and a clarifying timeline, are important in making a case. And of course, these helped in making the determinations that were easiest for the jury to reach—on whether Ravi destroyed evidence, etc. (“Jurors say they were open-minded, but evidence was strong,” The Star-Ledger [March 17, 2012], p. 7, especially from juror Bruno Ferreira: “…Nothing means we could be personally biased toward the defendant. You have to look at all the facts and the evidence. That’s why you have 24 counts guilty and 11 not guilty.” And: “Ferreira ‘said decisions on the charges of witness tampering and invasion of privacy were “easy” and “cut and dry” ’” [p. 7, columns 2 and 6].)
As juror Ferreira also said, “We could not prove that [Ravi] did it purposely to intimidate Tyler. […] We couldn’t prove that he did it knowing that Tyler was going to get intimidated because of his sexual orientation.
“But we came to the conclusion where, with the evidence that was provided by the state and the defense, […] that it showed that (Clementi) did…have a reasonable belief that [Ravi] wanted to intimidate him because of his sexual orientation” (“Jurors say…,” The Star-Ledger [March 17, 2012], p. 7; non-bracketed editorial changes in original).
If this was such a watershed case, as news reports have suggested, and it can be a “culture-changer,” then consider this: in capital murder cases, juries need to find guilt beyond a reasonable doubt. In this case, Clementi actually died, though Ravi was not on trial for his death, nor do I think many people consider Ravi to have been a necessary and sufficient cause for Clementi’s death. But what does it mean to say “(Clementi) did…have a reasonable belief that [Ravi] wanted to intimidate him because of his sexual orientation”? How can a jury assess that? Simply because the clumsy statute seems to require it?
Why not look more closely at the nature of the Internet communications involved? Don’t take for granted that someone’s alerting others to a homosexual encounter via tweets automatically means (in some old sense) bullying. Instead, what ambiguities about the electronic/Internet kind of communication, and what kind of mitigating or, on the other hand, “inculpating” context, do we also need to look at?

###

[1] See, for example, Mark Di Ionno’s column, “N.J. bias law on trial alongside Ravi,” The Star-Ledger (March 15, 2012), pp. 15, 18. See also “Law experts say verdict breaks new ground,” The Star-Ledger (March 17, 2012), p. 8.