"For a pure sense of being tumultuously alive, you can't beat the nasty side of existence. I may not have been a matinee idol, but say what you will about me, it's been a real human life!" Mickey Sabbath, in Philip Roth's Sabbath's Theater

"Talent hits a target no one else can hit; genius hits a target no one else can see." Arthur Schopenhauer

“Why does everything you know, and everything you’ve learned, confirm you in what you believed before? Whereas in my case, what I grew up with, and what I thought I believed, is chipped away a little and a little, a fragment then a piece and then a piece more. With every month that passes, the corners are knocked off the certainties of this world: and the next world too. Show me where it says, in the Bible, ‘Purgatory.’ Show me where it says ‘relics, monks, nuns.’ Show me where it says ‘Pope.’” –Thomas Cromwell imagines asking Thomas More—Wolf Hall by Hilary Mantel

My own favorite posts to get started: The Self-Righteousness Instinct, Sabbath Says, Encounters, Inc., and Muddling through Life after Life.

Thursday, April 10, 2014

Why I Won't Be Attending the Gender-Flipped Shakespeare Play

The Guardian’s “Women’s Blog” reports that “Gender-flips used to challenge sexist stereotypes are having a moment,” and this is largely owing, author Kira Cochrane suggests, to the fact that “Sometimes the best way to make a point about sexism is also the simplest.” This simple approach to making a point consists of taking a work of art or piece of advertising and swapping the genders featured in them. Cochrane goes on to point out that “the gender-flip certainly isn’t a new way to make a political point,” and notes that “it’s with the recent rise of feminist campaigning and online debate that this approach has gone mainstream.”

What is the political point gender-flips are making? As a dancer in a Jennifer Lopez video that reverses the conventional gender roles asks, “Why do men always objectify the women in every single video?” Australian comedian Christiaan Van Vuuren explains that he posed for a reproduction of a GQ cover originally featuring a sexy woman to call attention to the “over-sexualization of the female body in the high-fashion world.” The original cover photo of Miranda Kerr is undeniably beautiful. The gender-flipped version is funny. The obvious takeaway is that we look at women and men differently (gasp!). When women strike an alluring pose, or don revealing clothes, it’s sexy. When men try to do the same thing, it’s ridiculous. Feminists insist that this objectification or over-sexualization of women is a means of oppression. But is it? And are gender-flips simple ways of making a point, or just cheap gimmicks?

Tonight, my alma mater IPFW is hosting a production called “Juliet and Romeo,” a gender-flipped version of Shakespeare’s most recognizable play. The lead on the Facebook page for the event asks us to imagine that “Juliet is instead a bold Montague who courts a young, sheltered Capulet by the name of Romeo.” Lest you fear the production is just a stunt to make a political point about gender, the hosts have planned a “panel discussion focusing on Shakespeare, gender, and language.” Many former classmates and teachers, most of whom I consider friends, a couple I consider good friends, are either attending or participating in the event. But I won’t be going.

I don’t believe the production is being put on in the spirit of open-minded experimentation. Like the other gender-flip examples, the purpose of staging “Juliet and Romeo” is to make a point about stereotypes. And I believe this proclivity toward using literature as fodder for ideological agendas is precisely what’s most wrong with English lit programs in today’s universities. There have to be better ways to foster interest in great works than letting activists posing as educators use them as anvils on which to hammer agendas into students’ heads.

You may take the position that my objections would carry more weight were I to attend the event before rendering judgment on it. But I believe the way to approach literature is as an experience, not as a static set of principles or stand-alone abstractions. And I don’t want thoughts about gender politics to intrude on my experience of Shakespeare—especially when those thoughts are of such dubious merit. I want to avoid the experience of a gender-flipped production of Shakespeare because I believe scholarship should push us farther into literature—enhance our experience of it, make it more immediate and real—not cast us out of it by importing elements of political agendas and making us cogitate about some supposed implications for society of what’s going on before our eyes.

Regarding that political point, I see no contradiction in accepting, even celebrating, our culture’s gender roles while at the same time supporting equal rights for both genders. Sexism is a belief that one gender is inferior to the other. Demonstrating that people of different genders tend to play different roles in no way proves that either is being treated as inferior. As for objectification and over-sexualization, a moment’s reflection ought to make clear that the feminists are getting this issue perfectly backward. Physical attractiveness is one of the avenues through which women exercise power over men. Miranda Kerr got paid handsomely for that GQ cover. And what could be more arrantly hypocritical than Jennifer Lopez complaining about objectification in music videos? She owes her celebrity in large part to her willingness to allow herself to be objectified. The very concept of objectification is only something we accept from long familiarity--people are sexually aroused by other people, not objects.

I’m not opposed to having a discussion about gender roles and power relations, but if you have something to say, then say it. I’m not even completely opposed to discussing gender in the context of Shakespeare’s plays. What I am opposed to is people hijacking our experience of Shakespeare to get some message across, people toeing the line by teaching that literature is properly understood by “looking at it through the lens” of one or another well-intentioned but completely unsupported ideology, and people misguidedly making sex fraught and uncomfortable for everyone. I doubt I’m alone in turning to literature, at least in part, to get away from the sort of puritanism one expects from church. Guilt-tripping guys and encouraging women to walk around with a chip on their shoulders must be one of the least effective ways we’ve ever come up with to get people to respect each other.

But, when you guys do a performance of the original Shakespeare, you can count on me being there to experience it. 


Update:

The link to this post on Facebook generated some heated commentary. Some comments were denials of any ideological intentions on the part of those putting on the event. Some were mischaracterizations based on presumed “traditionalist” associations with my position. Some made the point that Shakespeare himself played around with gender, so it should be okay for others to do the same with his work. In the end, I did feel compelled to attend the event because I had taken such a strong position. Having flip-flopped and attended, I have to admit I enjoyed it. All the people involved were witty, charming, intellectually stimulating, and pretty much all-around delightful.

But, as was my original complaint, it was quite clear—and at two points explicitly stated—that the "experiment" entailed using the play as a springboard for a discussion of current issues like marriage rights. Everyone, from the cast to audience members, was quick to insist after the play that they felt it was completely natural and convincing. But gradually more examples of "awkward," "uncomfortable," or "weird" lines or scenes came up. Shannon Bischoff, a linguist one commenter characterized as the least politically correct guy I’d ever meet, did in fact bring up a couple of aspects of the adaptation that he found troubling. But even he paused after saying something felt weird, as if to ask, "Is that all right?" (Being weirded out by a 15-year-old Romeo being pursued by a Juliet in her late teens was okay because it was about age, not gender.)

The adapter himself, Jack Cant, said at one point that though he was tempted to rewrite some of the parts that seemed really strange, he decided to leave them in because he wanted to let people be uncomfortable. The underlying assumption of the entire discussion was that gender is a "social construct" and that our expectations are owing solely to "stereotypes." And the purpose of the exercise was for everyone to be brought face-to-face with their assumptions about gender so that they could expiate them. I don't think any fair-minded attendee could deny the agreed-upon message was that this is a way to help us do away with gender roles—and that doing so would be a good thing. (If there was any doubt, Jack’s wife eliminated it when she stood up from her seat in the audience to say she wondered if Jack had learned enough from the exercise to avoid applying gender stereotypes to his nieces.) And this is exactly what I mean by ideology. Sure, Shakespeare played around with gender in As You Like It and Twelfth Night. But he did it for dramatic or comedic effect primarily, and to send a message secondarily—or more likely not at all.

For the record, I think biology plays a large (but of course not exclusive) part in gender roles, I enjoy and celebrate gender roles (love being a man; love women who love being women), but I also support marriage rights for homosexuals and try to be as accepting as I can of people who don't fit the conventional roles.

To make one further clarification: whether you support an anti-gender agenda and whether you think Shakespeare should be used as a tool for this or any other ideological agenda are two separate issues. I happen not to support anti-genderism. My main point in this post, however, is that ideology—good, bad, valid, invalid—should not play a part in literature education. Because, for instance, while students are being made to feel uncomfortable about their unexamined gender assumptions, they're not feeling uncomfortable about, say, whether Romeo might be rushing into marriage too hastily, or whether Juliet will wake up in time to keep him from drinking the poison—you know, the actual play. 

Whether Shakespeare was sending a message or not, I'm sure he wanted first and foremost for his audiences to respond to the characters he actually created. And we shouldn't be using "lenses" to look at plays; we should be experiencing them. They're not treatises. They're not coded allegories. And, as old as they may be to us, every generation of students gets to discover them anew. 

We can discuss politics and gender or whatever you want. There's a time and a place for that and it's not in a lit classroom. Sure, let's encourage students to have open minds about gender and other issues, and let's help them to explore their culture and their own habits of thought. There are good ways to do that—ideologically adulterated Shakespeare is not one of them.





Sunday, March 30, 2014

The Better-than-Biblical History of Humanity Hidden in Tiny Cells and a Great Story of Science Hidden in Plain Sight

            Anthropology enthusiasts became acquainted with the name Svante Pääbo in books or articles published throughout the latter half of this century’s first decade about how our anatomically modern ancestors might have responded to the presence of other species of humans as they spread over new continents tens of thousands of years ago. The first bit of news associated with this unplaceable name was that humans probably never interbred with Neanderthals, a finding that ran counter to the multiregionalist theory of human evolution and lent support to the theory of a single origin in Africa. The significance of the Pääbo team’s findings in the context of this longstanding debate was a natural enough angle for science writers to focus on. But what’s shocking in hindsight is that so little of what was written during those few years conveyed any sense of wonder at the discovery that DNA from Neanderthals, a species that went extinct 30,000 years ago, was still retrievable—that snatches of it had in fact already been sequenced.

Then, in 2010, the verdict suddenly changed; humans really had bred with Neanderthals, and all people alive today who trace their ancestry to regions outside of Africa carry vestiges of those couplings in their genomes. The discrepancy between the two findings, we learned, was owing to the first being based on mitochondrial DNA and the second on nuclear DNA. Even those anthropology students whose knowledge of human evolution derived mostly from what can be gleaned from the shapes and ages of fossil bones probably understood that, since every cell of a creature’s body holds hundreds of copies of mitochondrial DNA but only a couple of copies of the nuclear genome, this latest feat of gene sequencing must have been an even greater challenge. Yet, at least among anthropologists, the accomplishment got swallowed up in the competition between rival scenarios for how our species came to supplant all the other types of humans. Though, to be fair, there was a bit of marveling among paleoanthropologists at the implications of being some percentage Neanderthal.

            Fortunately for us enthusiasts, in his new book Neanderthal Man: In Search of Lost Genomes, Pääbo, a Swedish molecular biologist now working at the Max Planck Institute in Leipzig, goes some distance toward making it possible for everyone to appreciate the wonder and magnificence of his team’s monumental achievements. It would have been a great service to historians for him to simply recount the series of seemingly insurmountable obstacles the researchers faced at various stages, along with the technological advances and bursts of inspiration that saw them through. But what he’s done instead is pen a deeply personal and sparely stylish paean to the field of paleogenetics and all the colleagues and supporters who helped him create it.

It’s been over sixty years since Watson and Crick, with some help from Rosalind Franklin, revealed the double-helix structure of DNA. But the Human Genome Project, the massive effort to sequence all three billion base pairs that form the blueprint for a human, was completed just over ten years ago. As inexorable as the march of technological progress often seems, the jump from methods for sequencing the genes of living creatures to those of long-extinct species only strikes us as foregone in hindsight. At the time when Pääbo was originally dreaming of ancient DNA, which he first hoped to retrieve from Egyptian mummies, there were plenty of reasons to doubt it was possible. He writes,

When we die, we stop breathing; the cells in our body then run out of oxygen, and as a consequence their energy runs out. This stops the repair of DNA, and various sorts of damage rapidly accumulate. In addition to the spontaneous chemical damage that continually occurs in living cells, there are forms of damage that occur after death, once the cells start to decompose. One of the crucial functions of living cells is to maintain compartments where enzymes and other substances are kept separate from one another. Some of these compartments contain enzymes that break down DNA from various microorganisms that the cell may encounter and engulf. Once an organism dies and runs out of energy, the compartment membranes deteriorate, and these enzymes leak out and begin degrading DNA in an uncontrolled way. Within hours and sometimes days after death, the DNA strands in our body are cut into smaller and smaller pieces, while various other forms of damage accumulate. At the same time, bacteria that live in our intestines and lungs start growing uncontrollably when our body fails to maintain the barriers that normally contain them. Together these processes will eventually dissolve the genetic information stored in our DNA—the information that once allowed our body to form, be maintained, and function. When that process is complete, the last trace of our biological uniqueness is gone. In a sense, our physical death is then complete. (6)

The hope was that amid this nucleic carnage enough pieces would survive to restore a single strand of the entire genome. That meant Pääbo needed lots of organic remains and some really powerful extraction tools. It also meant that he’d need some well-tested and highly reliable methods for fitting the pieces of the puzzle together.

            Along with the sense of inevitability that follows fast on the heels of any scientific advance, the impact of the Neanderthal Genome Project’s success in the wider culture was also dampened by a troubling inability on the part of the masses to appreciate that not all ideas are created equal—that any particular theory is only as good as the path researchers followed to arrive at it and the methods they used to validate it. Sadly, it’s in all probability the very people who would have been the most thoroughly gobsmacked by the findings coming out of the Max Planck Institute whose amazement switches are most susceptible to hijacking at the hands of the charlatans and ratings whores behind shows like Ancient Aliens. More serious than the cheap fictions masquerading as science that abound in pop culture, though, is a school of thought in academia that not only fails to grasp, but outright denies, the value of methodological rigor, charging that the methods themselves are mere vessels for the dissemination of encrypted social and political prejudices.

Such thinking can’t survive even the most casual encounter with the realities of how science is conducted. Pääbo, for instance, describes his team’s frustration whenever rival researchers published findings based on protocols that failed to meet the standards they’d developed to rule out contamination from other sources of genetic material. He explains the common “dilemma in science” whereby

doing all the analyses and experiments necessary to tell the complete story leaves you vulnerable to being beaten to the press by those willing to publish a less complete story that nevertheless makes the major point you wanted to make. Even when you publish a better paper, you are seen as mopping up the details after someone who made the real breakthrough. (115)

The more serious challenge for Pääbo, however, was dialing back extravagant expectations on the part of prospective funders against the backdrop of popular notions propagated by the Jurassic Park movie franchise and extraordinary claims from scientists who should’ve known better. He writes,

As we were painstakingly developing methods to detect and eliminate contamination, we were frustrated by flashy publications in Nature and Science whose authors, on the surface of things, were much more successful than we were and whose accomplishments dwarfed the scant products of our cumbersome efforts to retrieve DNA sequences “only” a few tens of thousands of years old. The trend had begun in 1990, when I was still at Berkeley. Scientists at UC Irvine published a DNA sequence from leaves of Magnolia latahensis that had been found in a Miocene deposit in Clarkia, Idaho, and were 17 million years old. This was a breathtaking achievement, seeming to suggest that one could study DNA evolution on a time scale of millions of years, perhaps even going back to the dinosaurs! (56)

            In the tradition of the best scientists, Pääbo didn’t simply retreat to his own projects to await the inevitable retractions and failed replications but instead set out to apply his own more meticulous extraction methods to the fossilized plant material. He writes,

I collected many of these leaves and brought them back with me to Munich. In my new lab, I tried extracting DNA from the leaves and found they contained many long DNA fragments. But I could amplify no plant DNA by PCR. Suspecting that the long DNA was from bacteria, I tried primers for bacterial DNA instead, and was immediately successful. Obviously, bacteria had been growing in the clay. The only reasonable explanation was that the Irvine group, who worked on plant genes and did not use a separate “clean lab” for their ancient work, had amplified some contaminating DNA and thought it came from the fossil leaves. (57)

With the right equipment, it turns out, you can extract and sequence genetic material from pretty much any kind of organic remains, no matter how old. The problem is that sources of contamination are myriad, and whatever DNA you manage to read is almost sure to be from something other than the ancient creature you’re interested in.

            At the time when Pääbo was busy honing his techniques, many scientists thought genetic material from ancient plants and insects might be preserved in the fossilized tree resin known as amber. Sure enough, in the late 80s and early 90s, George Poinar and Raul Cano published a series of articles in which they claimed to have successfully extracted DNA through tiny holes drilled into chunks of amber to reach embedded bugs and leaves. These articles were in fact the inspiration behind Michael Crichton’s description of how the dinosaurs in Jurassic Park were cloned. But Pääbo had doubts about whether these researchers were taking proper precautions to rule out contamination, and no sooner had he heard about their findings than he started trying to find a way to get his hands on some amber specimens. He writes,

The opportunity to find out came in 1994, when Hendrik Poinar joined our lab. Hendrik was a jovial Californian and the son of George Poinar, then a professor at Berkeley and a well-respected expert on amber and the creatures found in it. Hendrik had published some of the amber DNA sequences with Raul Cano, and his father had access to the best amber in the world. Hendrik came to Munich and went to work in our new clean room. But he could not repeat what had been done in San Luis Obispo. In fact, as long as his blank extracts were clean, he got no DNA sequences at all out of the amber—regardless of whether he tried insects or plants. I grew more and more skeptical, and I was in good company. (58)

Those blank extracts were important not just to test for bacteria in the samples but to check for human cells as well. Indeed, one of the special challenges of isolating Neanderthal DNA is that it looks so much like the DNA of the anatomically modern humans handling the samples and the sequencing machines.

A high percentage of the dust that accumulates in houses is made up of our sloughed-off skin cells. And the Polymerase Chain Reaction (PCR), the technique Pääbo’s team was using to increase the amount of target DNA, relies on a powerful amplification process that uses rapid heating and cooling to split double-helix strands up the middle before fitting free nucleotides along each exposed side like the teeth of a zipper, resulting in exponential replication. The result is that each fragment of a genome gets blown up, and it becomes impossible to tell what percentage of the specimen’s DNA it originally represented. Researchers then try to fit the fragments end-to-end based on repeating overlaps until they have an entire strand. If there’s a great deal of similarity between the individual you’re trying to sequence and the individual whose cells have contaminated the sample, you simply have no way to know whether you’re splicing together fragments of one genome or the other. Much of the early work Pääbo did was with extinct mammals like giant ground sloths, which were easier to disentangle from humans. These early studies were what led to the development of practices like running blank extracts, which would later help his team ensure that their supposed Neanderthal DNA wasn’t really from modern human dust.
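To make those two processes a little more concrete, here is a minimal sketch in Python (not anything Pääbo’s team actually ran; the sequences, cycle counts, and function names are invented purely for illustration) of how cycle-by-cycle doubling erases a sample’s original proportions and how fragments get stitched together wherever their ends overlap:

```python
# Toy illustration only -- not Paabo's pipeline. Sequences, cycle counts,
# and the minimum-overlap threshold are invented for demonstration.

def pcr_copies(starting_fragments, cycles):
    """Each heating/cooling cycle roughly doubles the targeted fragments,
    so the final count says nothing about the original proportions."""
    return starting_fragments * 2 ** cycles

def merge_on_overlap(a, b, min_overlap=4):
    """Join two fragments on the longest suffix of `a` that is also a
    prefix of `b`; return None if they don't overlap enough."""
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a[-k:] == b[:k]:
            return a + b[k:]
    return None

# Ten surviving fragments become over ten million copies after 20 cycles.
print(pcr_copies(10, 20))                          # 10485760

# Two fragments sharing the overlap "GATTA" splice into one longer strand.
# A contaminating modern-human fragment with the same overlap would splice
# in just as readily -- the ambiguity described above.
print(merge_on_overlap("ACCTGATTA", "GATTACCA"))   # ACCTGATTACCA
```

The second function is why the blank extracts matter so much: when the only thing distinguishing the specimen’s fragments from the handler’s is the sequence itself, overlaps alone can’t tell you whose genome you’re reassembling.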

Despite all the claims of million-year-old DNA being publicized, Pääbo and his team eventually had to rein in their frustration and stop “playing the PCR police” (61) if they ever wanted to show their techniques could be applied to an ancient species of human. One of the major events in Pääbo’s life that would make this huge accomplishment a reality was the founding of the Max Planck Institute for Evolutionary Anthropology in 1997. As celebrated as the Max Planck Society is today, though, the idea of an institute devoted to scientific anthropology in Germany at the time had to overcome some resistance arising out of fears that history might repeat itself. Pääbo explains,

As do many contemporary German institutions, the MPS had a predecessor before the war. Its name was the Kaiser Wilhelm Society, and it was founded in 1911. The Kaiser Wilhelm Society had built up and supported institutes around eminent scientists such as Otto Hahn, Albert Einstein, Max Planck, and Werner Heisenberg, scientific giants active at a time when Germany was a scientifically dominant nation. That era came to an abrupt end when Hitler rose to power and the Nazis ousted many of the best scientists because they were Jewish. Although formally independent of the government, the Kaiser Wilhelm Society became part of the German war machine—doing, for example, weapons research. This was not surprising. Even worse was that through its Institute for Anthropology, Human Heredity, and Eugenics the Kaiser Wilhelm Society was actively involved in racial science and the crimes that grew out of that. In that institute, based in Berlin, people like Josef Mengele were scientific assistants while performing [experiments] on inmates at Auschwitz death camp, many of them children. (81-2)

Even without such direct historical connections, many scholars still automatically leap from any mention of anthropology or genetics to dubious efforts to give the imprimatur of science to racial hierarchies and clear the way for atrocities like eugenic culling or sterilizations, even though no scientist in any field would have truck with such ideas and policies after the lessons of the past century.

            Pääbo not only believed that anthropological science could be conducted without repeating the atrocities of the past; he insisted that allowing history to rule real science out of bounds would effectively defeat the purpose of the entire endeavor of establishing an organization for the study of human origins. Called on as a consultant to help steer a course for the institute he was simultaneously being recruited to work for, Pääbo recalls responding to the administrators’ historical concerns,

Perhaps it was easier for me as a non-German born well after the war to have a relaxed attitude toward this. I felt that more than fifty years after the war, Germany could not allow itself to be inhibited in its scientific endeavors by its past crimes. We should neither forget history nor fail to learn from it, but we should also not be afraid to go forward. I think I even said that fifty years after his death, Hitler should not be allowed to dictate what we could or could not do. I stressed that in my opinion any new institute devoted to anthropology should not be a place where one philosophized about human history. It should do empirical science. Scientists who were to work there should collect real hard facts about human history and test their ideas against them. (82-3)

As it turned out, Pääbo wasn’t alone in his convictions, and his vision of what the institute should be and how it should operate came to fruition with the construction of the research facility in Leipzig.

            Faced with Pääbo’s passionate enthusiasm, some may worry that he’s one of those mad scientists we know about from movies and books, willing to push ahead with his obsessions regardless of the moral implications or the societal impacts. But in fact Pääbo goes a long way toward showing that the popular conception of the socially oblivious scientist, who calculates but can’t think and who solves puzzles but is baffled by human emotions, is not just a caricature but a malicious fiction. For instance, even amid the excitement of his team’s discovery that humans reproduced with Neanderthals, Pääbo was keenly aware that his results revealed stark genetic differences between Africans, who have no Neanderthal DNA, and non-Africans, most of whose genomes are between one and four percent Neanderthal. He writes,

When we had come this far in our analyses, I began to worry about what the social implications of our findings might be. Of course, scientists need to communicate the truth to the public, but I feel that they should do so in ways that minimize the chance for it to be misused. This is especially the case when it comes to human history and human genetic variation, when we need to ask ourselves: Do our findings feed into prejudices that exist in society? Can our findings be misrepresented to serve racists’ purposes? Can they be deliberately or unintentionally misused in some other way? (199-200)

In light of the Neanderthal’s own caricature—hunched, brutish, dimwitted—their contribution to non-Africans’ genetic makeup may actually seem like more of a drawback than a basis for any claims of superiority. The trouble would come, however, if some of these genes turned out to confer adaptive advantages that made their persistence in our lineage more likely. There are already some indications, for instance, that Neanderthal-human hybrids had more robust immune responses to certain diseases. And the potential for further discoveries along these lines is limitless.

            Neanderthal Man explores the personal and political dimensions of a major scientific undertaking, but it’s Pääbo’s remembrances of what it was like to work with the other members of his team that bring us closest to the essence of what science is—or at least what it can be. At several points along the team’s journey, they were faced with setbacks and technical challenges that threatened to sink the entire endeavor. Pääbo describes what it was like when, at one critical juncture where things looked especially dire, everyone put their heads together in weekly meetings to try to come up with solutions and assign tasks:

To me, these meetings were absorbing social and intellectual experiences: graduate students and postdocs know that their careers depend on the results they achieve and the papers they publish, so there is always a certain amount of jockeying for opportunity to do the key experiments and to avoid doing those that may serve the group’s aim but will probably not result in prominent authorship on an important publication. I had become used to the idea that budding scientists were largely driven by self-interest, and I recognized that my function was to strike a balance between what was good for someone’s career and what was necessary for a project, weighing individual abilities in this regard. As the Neanderthal crisis loomed over the group, however, I was amazed to see how readily the self-centered dynamic gave way to a more group-centered one. The group was functioning as a unit, with everyone eagerly volunteering for thankless and laborious chores that would advance the project regardless of whether such chores would bring any personal glory. There was a strong sense of common purpose in what all felt was a historic endeavor. I felt we had the perfect team. In my more sentimental moments, I felt love for each and every person around the table. This made the feeling that we’d achieved no progress all the more bitter. (146-7)

Those “more sentimental moments” of Pääbo’s occur quite frequently, and he just as frequently describes his colleagues, and even his rivals, in a way that reveals his fondness and admiration for them. Unlike James Watson, who in The Double Helix, his memoir of how he and Francis Crick discovered the underlying structure of DNA, often comes across as nasty and condescending, Pääbo reveals himself to be bighearted, almost to a fault.

            Alongside the passion and the drive, we see Pääbo again and again pausing to reflect with childlike wonder at the dizzying advancement of technology and the incredible privilege of being able to carry on such a transformative tradition of discovery and human progress. He shows at once the humility of recognizing his own limitations and the restless curiosity that propels him onward in spite of them. He writes,

My twenty-five years in molecular biology had essentially been a continuous technical revolution. I had seen DNA sequencing machines come on the market that rendered into an overnight task the toils that took me days and weeks as a graduate student. I had seen cumbersome cloning of DNA in bacteria be replaced by the PCR, which in hours achieved what had earlier taken weeks or months to do. Perhaps that was what had led me to think that within a year or two we would be able to sequence three thousand times more DNA than what we had presented in the proof-of-principle paper in Nature. Then again, why wouldn’t the technological revolution continue? I had learned over the years that unless a person was very, very smart, breakthroughs were best sought when coupled to big improvements in technologies. But that didn’t mean we were simply prisoners awaiting rescue by the next technical revolution. (143)

Like the other members of his team, and like so many other giants in the history of science, Pääbo demonstrates an important and rare mix of seemingly contradictory traits: a capacity for dogged, often mind-numbing meticulousness and a proclivity toward boundless flights of imagination.

[Image: Denisovan finger bone]
What has been the impact of Pääbo and his team’s accomplishments so far? Their methods have already been applied to the remains of a 400,000-year-old human ancestor, led to the discovery of a completely new species of hominin known as the Denisovans (based on a tiny finger bone), and are helping settle a longstanding debate about the peopling of the Americas. The out-of-Africa hypothesis is, for now, the clear victor over the multiregionalist hypothesis, but of course the single-origin theory has become more complicated. Many paleoanthropologists are now talking about what Pääbo calls the “leaky replacement” model (248). Aside from filling in some of the many gaps in the chronology of humankind’s origins and migrations—or rather fitting together more pieces in the vast mosaic of our species’ history—every new genome helps us to triangulate possible functions for specific genes. As Pääbo explains, “The dirty little secret of genomics is that we still know next to nothing about how a genome translates into the particularities of a living and breathing individual” (208). But knowing the particulars of how human genomes differ from chimp genomes, and how both differ from the genomes of Neanderthals, or Denisovans, or any number of living or extinct species of primates, gives us clues about how those differences contribute to making each of us who and what we are. The Neanderthal genome is not an end-point but rather a link in a chain of discoveries. Nonetheless, we owe Svante Pääbo a debt of gratitude for helping us to appreciate what all went into the forging of this particular, particularly extraordinary link.



Saturday, February 22, 2014

The Time for Tales and A Tale for the Time Being

Storytelling in the twenty-first century is a tricky business. People are faced with too many real-world concerns to be genuinely open to the possibility of caring what happens to imaginary characters in a made-up universe. Some of us even feel a certain level of dread as we read reviews or lists of the best books of any given year, loath to add yet another modern classic to the towering mental stack we resolve to read. Yet we’re hardly ever as grateful to anyone as we are to an author or filmmaker who seduces us with a really good story. No sooner have we closed a novel whose characters captured our heart and whose plights succeeded in making us forget our own daily anxieties, however briefly, than we feel compelled to proselytize on behalf of that story to anyone whose politeness we can exploit long enough to deliver the message. The enchantment catches us so unawares, and strikes us as so difficult to account for according to any reckoning of daily utility, that it feels perfectly natural to relay our literary experiences in religious language. But, as long as we devote sufficient attention to an expertly wrought, authentically heartfelt narrative, even those of us with the most robust commitment to the secular will find ourselves succumbing to the magic.

The pages of Ruth Ozeki’s novel A Tale for the Time Being are brimful with the joys and heartache, not just of fiction in general, but of literary fiction in particular, with the bite of reality that sinks deeper than that of the commercial variety, which sacrifices lasting impact for immediate thrills—this despite the Gothic and science fiction elements Ozeki judiciously sprinkles throughout her chapters. The story is about a woman who becomes obsessed with a story despite herself because she can’t help worrying about the fate of the teenage girl who wrote it. This woman, a novelist who curiously bears the same first name as the novel’s author, only reluctantly brings a barnacle-covered freezer bag concealing a Hello Kitty lunchbox to her home after discovering it washed up on a beach off the coast of British Columbia. And it’s her husband Oliver—named after Ozeki’s real-life husband—who brings the bag into the kitchen so he can examine the contents, over her protests. Inside the lunchbox, Oliver finds a stack of letters written in Japanese and dating to the Second World War. With them, there is a diary written in French, and what at first appears to be a copy of Proust’s In Search of Lost Time but turns out to be another diary, with pages in the handwritten English of a teenage Japanese girl named Nao, pronounced much like the word “now.”

Ruth begins reading the diary, sometimes by herself, sometimes to Oliver before they go to bed. Nao addresses her reader very personally, and delights in the idea that she may be sharing her story with a single individual. As she poses one question after another to this lone reader—who we know has turned out to be Ruth—Nao’s loneliness, her desperate need to unburden herself, wraps itself around Ruth and readers of Ozeki’s novel alike. In the early passages, we learn that Nao’s original plan for the pages under Proust’s cover was to write a biography of her 104-year-old great grandmother Jiko, a Buddhist nun living in a temple in the Miyagi Prefecture, just north of the Fukushima nuclear power plant, and then toss it into the waves of the Pacific for a single beachcomber to find. Contemplating the significance of her personal gesture toward some anonymous future reader, she writes,

If you ask me, it’s fantastically cool and beautiful. It’s like a message in a bottle, cast out onto the ocean of time and space. Totally personal, and real, too, right out of old Jiko’s and Marcel’s prewired world. It’s the opposite of a blog. It’s an antiblog, because it’s meant for only one special person, and that person is you. And if you’ve read this far, you probably understand what I mean. Do you understand? Do you feel special yet? (26)

Nao does manage to write a lot about her own experiences with her great grandmother, and we learn a bit about Jiko’s life before Nao was born. But for the most part Nao’s present troubles override her intentions to write about someone else’s life. Meanwhile, the character Ruth is trying to write a memoir about nursing her mother, who has recently died after a long struggle with Alzheimer’s, but she’s also getting distracted by Nao’s tribulations, even to the point of conducting a search for the girl based on whatever telling details she comes across in the diary.

            All great stories pulse with resonances from great stories past, and A Tale for the Time Being, whether as a result of human commonalities, direct influence, or cultural osmosis, reverberates with the tones and themes of Catcher in the Rye, Donnie Darko, The Karate Kid, My Girl, and the first third of Roberto Bolaño’s 2666. The characters in Ozeki’s multiple embedded and individually shared narratives fascinate themselves with, even as they reluctantly succumb to, the mystery of communion between storyteller and audience. But, since the religion—or perhaps rather the philosophy—that suffuses the characters’ lives is not Christian but Buddhist, it’s only fitting that the focus is on how narrative time flows according to a type of eternal present, always waking us up to the present moment. The surprise in store for readers of the novel, though, is that narrative communion—the bond of compassion between narrators and audiences—is intricately tied to our experience of time. This connection comes through in the double meaning of the phrase “for the time being,” which conveys that whatever you’re discussing, though not ideal, is sufficient for now, until something more suitable comes along. But it can also refer to a being existing in time—hence to all beings. As Nao explains in the early pages of her diary,

A time being is someone who lives in time, and that means you, and me, and every one of us who is, or was, or ever will be. As for me, right now I am sitting in a French maid Café in Akiba Electricity Town, listening to a sad chanson that is playing sometime in your past, which is also my present, writing this and wondering about you, somewhere in my future. And if you’re reading this, then maybe by now you’re wondering about me, too. (3)

Time is what connects every being—and, as Jiko insists, every object—that moves through it, moment by moment. Further, as Nao demonstrates, the act of narrating, in collusion with a separate act of attending, renders any separation in time somewhat moot.

            Revealing that the contrapuntal narration of A Tale for the Time Being represents the flowering of a relationship between two women who never even meet, whose destinies, it is tantalizingly suggested, may not even be unfolding in the same universe, will likely do nothing to blunt the impact of the story, for the same reasons the characters continually find themselves contemplating. Ruth, once she’s fully engaged in Nao’s story, becomes frustrated at how hard it is “to get a sense from the diary of the texture of time passing.” She wants so much to understand what Nao was going through—what she is going through in the story—but, as she explains, “No writer, even the most proficient, could re-enact in words the flow of a life lived, and Nao was hardly that skillful” (64). We may chuckle at how Ozeki is covering her tracks with this line about her prodigiously articulate sixteen-year-old protagonist. But she’s also showing just how powerful Ruth’s urge is to keep up her end of the connection. This paradox of urgent inconsequence is introduced by Nao on the first page of her diary. After posing a series of questions about the reader, she writes,  

Actually, it doesn’t matter very much, because by the time you read this, everything will be different, and you will be nowhere in particular, flipping idly through the pages of this book, which happens to be the diary of my last days on earth, wondering if you should keep on reading. (3)

In an attempt to mimic the flow of time as Nao experienced it, Ruth paces her reading of the diary to correspond with the spans between entries. All the while, though, she’s desperate to find out what happened—what happens—because she’s worried that Nao may have killed herself, or may still be on the verge of doing so.

            Nao wrote her diary in English because she spent most of her life in Sunnyvale, California, where her father Haruki worked for a tech firm. After he lost his job for what Nao assumes are economic reasons, he moved with her and her mother back to Tokyo. Nao starts having difficulties at school right away, but she’s reluctant to transfer to what she calls a stupid kids’ school because, as she explains,

It’s probably been a while since you were in junior high school, but if you can remember the poor loser foreign kid who entered your eighth-grade class halfway through the year, then maybe you will feel some sympathy for me. I was totally clueless about how you’re supposed to act in a Japanese classroom, and my Japanese sucked, and at the time I was almost fifteen and older than the other kids and big for my age, too, from eating so much American food. Also, we were broke so I didn’t have an allowance or any nice stuff, so basically I got tortured. In Japan they call it ijime, but that word doesn’t begin to describe what the kids used to do to me. I would probably already be dead if Jiko hadn’t taught me how to develop my superpower. Ijime is why it’s not an option for me to go to a stupid kids’ school, because in my experience, stupid kids can be even meaner than smart kids because they don’t have as much to lose. School just isn’t safe. (44)

The types of cruelty Nao is subjected to are insanely creative, and one of her teachers even participates so he can curry favor with the popular students. They pretend she’s invisible and only detectable by a terrible odor she exudes—and she perversely begins to believe she really does give off a tell-tale scent of poverty. They pinch her and pull out strands of her hair. At one point, they even stage an elaborate funeral for her and post a video of it online. Curiously, though, when Ruth searches for the title, nothing comes up.

            Unfortunately, the extent of Nao’s troubles stretches beyond the walls of the school. Her father, whom she and her mother believed to have been hired for a new job, tries to kill himself by lying down in front of a train. But the engineers see him in time to stop, and all he manages to do is incur a fine. Once Nao’s mother brings him home, he reveals the new job was a lie, that he’s been earning money gambling, and that now he’s lost nearly every penny. One of the few things Ruth finds in her internet searches is an essay by Nao’s father about why he and other Japanese men are attracted to the idea of suicide. She decides to email the professor who posted this essay, claiming that it’s “urgent” for her to find the author and his daughter. When Oliver reminds Ruth that all of this occurred some distance in the past, she’s dumbfounded. As the third-person narrator of the Ruth chapters explains,

It wasn't that she'd forgotten, exactly. The problem was more a kind of slippage. When she was writing a novel, living deep inside a fictional world, the days got jumbled together, and entire weeks or months or even years would yield to the ebb and flow of the dream. Bills went unpaid, emails unanswered, calls unreturned. Fiction had its own time and logic. That was its power. But the email she’d just written to the professor was not fiction. It was real, as real as the diary. (313-4)

Before ending up in that French maid café (the precise character of which is a bone of contention between Ruth and Oliver) trying to write Jiko’s life story, Nao had gone, at her parents’ prompting, to spend a summer with her great grandmother at the temple. And it’s here she learns to develop what Jiko calls her superpower. Of course, as she describes the lessons in her diary, Nao is also teaching to Ruth what Jiko taught to her. So Ozeki’s readers get the curious sense that not only is Ruth trying to care for Nao but Nao is caring for Ruth in return.

During her stay at the temple, Nao learns that Jiko became a nun after her son, Nao’s great uncle and her father’s namesake, died in World War II. It is the letters he wrote home as he underwent pilot training that Oliver and Ruth find with Nao’s diary, and they eventually learn that the other diary, the one written in French, is his as well. Even though Nao only learns about the man she comes to call Haruki #1 from her great grandmother’s stories, along with the letters and a few ghostly visitations, she develops a type of bond with him, identifying with his struggles, admiring his courage. At the same time, she comes to sympathize with and admire Jiko all the more for how she overcame the grief of losing her son. “By the end of the summer, with Jiko’s help,” Nao writes,

I was getting stronger. Not just strong in my body, but strong in my mind. In my mind, I was becoming a superhero, like Jubei-chan, the Samurai Girl, only I was Nattchan, the Super Nun, with abilities bestowed upon me by Lord Buddha that included battling the waves, even if I always lost, and being able to withstand astonishing amounts of pain and hardship. Jiko was helping me cultivate my supapawa! by encouraging me to sit zazen for many hours without moving, and showing me how not to kill anything, not even the mosquitoes that buzzed around my face when I was sitting in the hondo at dusk or lying in bed at night. I learned not to swat them even when they bit me and also not to scratch the itch that followed. At first, when I woke up, my face and arms were swollen from the bites, but little by little, my blood and skin grew tough and immune to their poison and I didn’t break out in bumps no matter how much I’d been bitten. And soon there was no difference between me and the mosquitoes. My skin was no longer a wall that separated us, and my blood was their blood. I was pretty proud of myself, so I went and found Jiko and I told her. She smiled. (204)

Nao is learning to break through the walls that separate her from others by focusing her mind on simply being, but the theme Ozeki keeps bringing us back to in her novel is that it’s not only through meditation that we open our minds compassionately to others.

            Despite her new powers, though, Nao’s problems at home and at school only get worse when she returns from her sojourn at the temple. And, as the urgency of her story increases, a long-simmering tension between Ruth and Oliver boils over into spats and insults. The added strain stems not so much from their different responses to Nao’s story as it does from the way the narration resonates with many of Ruth’s own feelings and experiences, making her more aware of all that’s going on in her own mind. When Nao explains how she came to be writing in a blanked out copy of In Search of Lost Time, for instance, she describes how the woman who makes the diaries “does it so authentically you don’t even notice the hack, and you almost think that the letters just slipped off the pages and fell to the floor like a pile of dead ants” (20). Later, Ruth reflects on how,

When she was little, she was always surprised to pick up a book in the morning, and open it, and find the letters aligned neatly in their places. Somehow she expected them to be all jumbled up, having fallen to the bottom when the covers were shut. Nao had described something similar, seeing the blank pages of Proust and wondering if the letters had fallen off like dead ants. When Ruth had read this, she’d felt a jolt of recognition. (63)

The real source of the trouble, though, is Nao’s description of her father as a hikikomori, which Ruth defines in one of the footnotes that constantly remind readers of her presence in Nao’s story as a “recluse, a person who refuses to leave the house” (70). Having agreed to retreat to the sparsely populated Whaletown, on the tiny island of Cortes, to care for her mother and support Oliver as he undertook a series of eminently eccentric botanical projects, part art and part ecological experimentations—the latest of which he calls the “Neo-Eocene”—Ruth is now beginning to long for the bustling crowds and hectic conveniences of urban civilization: “Engulfed by the thorny roses and massing bamboo, she stared out the window and felt like she’d stepped into a malevolent fairy tale” (61).

           A Tale for the Time Being is so achingly alive the characters’ thoughts and words lift up off the pages as if borne aloft on clouds of their own breath. Ironically, though, if there’s one character who suffers from overscripting it’s the fictional counterpart to Ozeki’s real-life husband. Oliver’s role is to see what Ruth cannot, moving the plot forward with strained scientific explanations of topics ranging from Pacific gyres, to Linnaean nomenclature, to quantum computing, raising the suspicion that he’s serving as the personification of the author’s own meticulous research habits. Under the guise of the masculine tendency to launch into spontaneous lectures—which to be fair Ozeki has at least one female character manifesting as well—Oliver at times transforms from a living character into a scholarly mouthpiece, performing a parody of professorial pedantry. For the most part, though, the philosophical ruminations in the novel emanate so naturally from the characters and are woven so seamlessly into the narrative that you can’t help following along with the characters' thoughts, as if taking part in a conversation. When Nao writes about discovering through a Google search that “À la recherche du temps perdu” means “In search of lost time,” she encourages her reader to contemplate what that means along with her:

Weird, right? I mean, there I was, sitting in a French maid café in Akiba, thinking about lost time, and old Marcel Proust was sitting in France a hundred years ago, writing a whole book about the exact same subject. So maybe his ghost was lingering between the covers and hacking into my mind, or maybe it was just a crazy coincidence, but either way, how cool is that? I think coincidences are cool, even if they don’t mean anything, and who knows? Maybe they do! I’m not saying everything happens for a reason. It was more just that it felt as if me and old Marcel were on the same wavelength. (23)

Waves and ghosts and crazy coincidences make up some of the central themes of the novel, but underlying them all is the spooky connectedness we humans so readily feel with one another, even against the backdrop of our capacity for the most savage cruelty.

            A Tale for the Time Being balances its experimental devices with time-honored storytelling techniques. Its central conceits are emblematic of today’s reigning fictional aesthetic, which embodies a playful exploration of the infinite possibilities of all things meta—stories about stories, narratives pondering the nature of narrative, authors becoming characters, characters becoming authors, fiction that’s indistinguishable from nonfiction and vice versa. We might call this tradition jootsism, after cognitive scientist Douglas Hofstadter’s term jootsing, or jumping out of the system, which he theorizes is critical to the emergence of consciousness from physical substrates, whether biochemical or digital. Some works in this tradition fatally undermine the systems they jump out of, revealing that the story readers are emotionally invested in is a hoax. Indeed, one strain of literary theory that’s been highly influential over the past four decades equates taking pleasure from stories with indulging in the reaffirmation of prejudices against minorities. Authors in this school therefore endeavor to cast readers out of their own narratives as punishment for their complicity in societal oppression. Fortunately, the types of gimmickry common to this brand of postmodernism—deliberately obnoxious and opaquely nonsensical neon-lit syntactical acrobatics—appear to have drastically declined in popularity, though the urge toward guilt-tripping readers and the obsession with the most harebrained and pusillanimous forms of identity politics persist. There are hidden messages in all media, we’re taught to believe, and those messages are the lifeblood of all that’s evil in our civilization.

But even those of us who don’t subscribe to the ideology that sees all narratives as sinister political allegories are still looking to be challenged, and perhaps even enlightened, every time we pick up a new book. If there is a set of conventions realist literary fiction is trying to break itself free of, it’s got to be the standard lineup of what masquerade as epiphanies but are really little more than varieties of passive acceptance or world-weary acquiescence in the face of life’s inexorables: disappointment, death, the delimiting of opportunity, the dimming of beauty, the diminishing of faculties, the desperate need for love, the absence of any ideal candidate for love. The postmodern wing of the avant-garde offers nothing in place of these old standbys but meaningless antics and sadomasochistic withholdings of the natural pleasure humans derive from sharing stories. The theme that emerges from all the jootsing in A Tale for the Time Being, a theme that paradoxically produces its own proof, is that the pleasure we get from stories comes from a kind of magic—the superpower of the storyteller, with the generous complicity of the reader—and that this is the same magic that forms the bonds between us and our loved ones. In a world where teenagers subject each other to unspeakable cruelty, where battling nations grind bodies by the boatload into lifeless, nameless mush, there is still solace to be had, and hope, in the daily miracle of how easily we’re made to feel the profoundest sympathy for people we never even meet, simply by experiencing their stories.



Friday, January 24, 2014

Lab Flies: Joshua Greene’s Moral Tribes and the Contamination of Walter White

Walter White’s Moral Math

In an episode near the end of Breaking Bad’s fourth season, the drug kingpin Gus Fring gives his meth cook Walter White an ultimatum. Walt’s brother-in-law Hank is a DEA agent who has been getting close to discovering the high-tech lab Gus has created for Walt and his partner Jesse, and Walt, despite his best efforts, hasn’t managed to put him off the trail. Gus decides that Walt himself has likewise become too big a liability, and he has found that Jesse can cook almost as well as his mentor. The only problem for Gus is that Jesse, even though he too is fed up with Walt, will refuse to cook if anything happens to his old partner. So Gus has Walt taken at gunpoint to the desert where he tells him to stay away from both the lab and Jesse. Walt, infuriated, goads Gus with the fact that he’s failed to turn Jesse against him completely, to which Gus responds, “For now,” before going on to say,

In the meantime, there’s the matter of your brother-in-law. He is a problem you promised to resolve. You have failed. Now it’s left to me to deal with him. If you try to interfere, this becomes a much simpler matter. I will kill your wife. I will kill your son. I will kill your infant daughter.

In other words, Gus tells Walt to stand by and let Hank be killed or else he will kill his wife and kids. Once he’s released, Walt immediately has his lawyer Saul Goodman place an anonymous call to the DEA to warn them that Hank is in danger. Afterward, Walt plans to pay a man to help his family escape to a new location with new, untraceable identities—but he soon discovers the money he was going to use to pay the man has already been spent (by his wife Skyler). Now it seems all five of them are doomed. This is when things get really interesting.

            Walt devises an elaborate plan to convince Jesse to help him kill Gus. Jesse knows that Gus would prefer for Walt to be dead, and both Walt and Gus know that Jesse would go berserk if anyone ever tried to hurt his girlfriend’s nine-year-old son Brock. Walt’s plan is to make it look like Gus is trying to frame him for poisoning Brock with ricin. The idea is that Jesse would suspect Walt of trying to kill Brock as punishment for Jesse betraying him and going to work with Gus. But Walt will convince Jesse that this is really just Gus’s ploy to trick Jesse into doing what he has up till now forbidden Gus to do—kill Walt himself. Once Jesse concludes that it was Gus who poisoned Brock, he will understand that his new boss has to go, and he will accept Walt’s offer to help him perform the deed. Walt will then be able to get Jesse to give him the crucial information he needs about Gus to figure out a way to kill him.

It’s a brilliant plan. The one problem is that it involves poisoning a nine-year-old child. Walt comes up with an ingenious trick which allows him to use a less deadly poison while still making it look like Brock has ingested the ricin, but for the plan to work the boy has to be made deathly ill. So Walt is faced with a dilemma: if he goes through with his plan, he can save Hank, his wife, and his two kids, but to do so he has to deceive his old partner Jesse in just about the most underhanded way imaginable—and he has to make a young boy very sick by poisoning him, with the attendant risk that something will go wrong and the boy, or someone else, or everyone else, will die anyway. The math seems easy: either four people die, or one person gets sick. The option recommended by the math is greatly complicated, however, by the fact that it involves an act of violence against an innocent child.

In the end, Walt chooses to go through with his plan, and it works perfectly. In another ingenious move, though, this time on the part of the show’s writers, Walt’s deception isn’t revealed until after his plan has been successfully implemented, which makes for an unforgettable shock at the end of the season. Unfortunately, this revelation after the fact, at a time when Walt and his family are finally safe, makes it all too easy to forget what made the dilemma so difficult in the first place—and thus makes it all too easy to condemn Walt for resorting to such monstrous means to see his way through.

            Fans of Breaking Bad who read about the famous thought-experiment called the footbridge dilemma in Harvard psychologist Joshua Greene’s multidisciplinary and momentously important book Moral Tribes: Emotion, Reason, and the Gap between Us and Them will immediately recognize the conflicting feelings underlying our responses to questions about serving some greater good by committing an act of violence. Here is how Greene describes the dilemma:

A runaway trolley is headed for five railway workmen who will be killed if it proceeds on its present course. You are standing on a footbridge spanning the tracks, in between the oncoming trolley and the five people. Next to you is a railway workman wearing a large backpack. The only way to save the five people is to push this man off the footbridge and onto the tracks below. The man will die as a result, but his body and backpack will stop the trolley from reaching the others. (You can’t jump yourself because you, without a backpack, are not big enough to stop the trolley, and there’s no time to put one on.) Is it morally acceptable to save the five people by pushing this stranger to his death? (113-4)

As was the case for Walter White when he faced his child-poisoning dilemma, the math is easy: you can save five people—strangers in this case—through a single act of violence. One of the fascinating things about common responses to the footbridge dilemma, though, is that the math is all but irrelevant to most of us; no matter how many people we might save, it’s hard for us to see past the murderous deed of pushing the man off the bridge. The answer for a large majority of people faced with this dilemma, even in the case of variations which put the number of people who would be saved much higher than five, is no, pushing the stranger to his death is not morally acceptable.

Another fascinating aspect of our responses is that they change drastically with the modification of a single detail in the hypothetical scenario. In the switch dilemma, a trolley is heading for five people again, but this time you can hit a switch to shift it onto another track where there happens to be a single person who would be killed. Though the math and the underlying logic are the same—you save five people by killing one—something about pushing a person off a bridge strikes us as far worse than pulling a switch. A large majority of people say killing the one person in the switch dilemma is acceptable. To figure out which specific factors account for the different responses, Greene and his colleagues tweak various minor details of the trolley scenario before posing the dilemma to test participants. By now, so many experiments have relied on these scenarios that Greene calls trolley dilemmas the fruit flies of the emerging field known as moral psychology.

The Automatic and Manual Modes of Moral Thinking

One hypothesis for why the footbridge case strikes us as unacceptable is that it involves using a human being as an instrument, a means to an end. So Greene and his fellow trolleyologists devised a variation called the loop dilemma, which still has participants pulling a hypothetical switch, but this time the lone victim on the alternate track must stop the trolley from looping back around onto the original track. In other words, you’re still hitting the switch to save the five people, but you’re also using a human being as a trolley stop. People nonetheless tend to respond to the loop dilemma in much the same way they do to the switch dilemma. So there must be some factor other than the prospect of using a person as an instrument that makes the footbridge version so objectionable to us.

Greene’s own theory for why our intuitive responses to these dilemmas are so different begins with what Daniel Kahneman, one of the founders of behavioral economics, labeled the two-system model of the mind. The first system, a sort of autopilot, is the one we operate in most of the time. We only use the second system when doing things that require conscious effort, like multiplying 26 by 47. While system one is quick and intuitive, system two is slow and demanding. Greene proposes as an analogy the automatic and manual settings on a camera. System one is point-and-click; system two, though more flexible, requires several calibrations and adjustments. We usually only engage our manual systems when faced with circumstances that are either completely new or particularly challenging.

According to Greene’s model, our automatic settings have functions that go beyond the rapid processing of information to encompass our basic set of moral emotions, from indignation to gratitude, from guilt to outrage, emotions that motivate us to behave in ways that, over evolutionary history, have helped our ancestors transcend their selfish impulses and live in cooperative societies. Greene writes,

According to the dual-process theory, there is an automatic setting that sounds the emotional alarm in response to certain kinds of harmful actions, such as the action in the footbridge case. Then there’s manual mode, which by its nature tends to think in terms of costs and benefits. Manual mode looks at all of these cases—switch, footbridge, and loop—and says “Five for one? Sounds like a good deal.” Because manual mode always reaches the same conclusion in these five-for-one cases (“Good deal!”), the trend in judgment for each of these cases is ultimately determined by the automatic setting—that is, by whether the myopic module sounds the alarm. (233)

What makes the dilemmas difficult then is that we experience them in two conflicting ways. Most of us, most of the time, follow the dictates of the automatic setting, which Greene describes as myopic because its speed and efficiency come at the cost of inflexibility and limited scope for extenuating considerations.

The reason our intuitive settings sound an alarm at the thought of pushing a man off a bridge but remain silent about hitting a switch, Greene suggests, is that our ancestors evolved to live in cooperative groups where some means of preventing violence between members had to be in place to avoid dissolution—or outright implosion. One of the dangers of living with a bunch of upright-walking apes who possess the gift of foresight is that any one of them could at any time be plotting revenge for some seemingly minor slight, or conspiring to get you killed so he can move in on your spouse or inherit your belongings. For a group composed of individuals with the capacity to hold grudges and calculate future gains to function cohesively, the members must have in place some mechanism that affords a modicum of assurance that no one will murder them in their sleep. Greene writes,

To keep one’s violent behavior in check, it would help to have some kind of internal monitor, an alarm system that says “Don’t do that!” when one is contemplating an act of violence. Such an action-plan inspector would not necessarily object to all forms of violence. It might shut down, for example, when it’s time to defend oneself or attack one’s enemies. But it would, in general, make individuals very reluctant to physically harm one another, thus protecting individuals from retaliation and, perhaps, supporting cooperation at the group level. My hypothesis is that the myopic module is precisely this action-plan inspecting system, a device for keeping us from being casually violent. (226)

Hitting a switch to transfer a train from one track to another seems acceptable, even though a person ends up being killed, because nothing our ancestors would have recognized as violence is involved.

Many philosophers cite our different responses to the various trolley dilemmas as support for deontological systems of morality—those based on the inherent rightness or wrongness of certain actions—since we intuitively know the choices suggested by a consequentialist approach are immoral. But Greene points out that this argument begs the question of how reliable our intuitions really are. He writes,

I’ve called the footbridge dilemma a moral fruit fly, and that analogy is doubly appropriate because, if I’m right, this dilemma is also a moral pest. It’s a highly contrived situation in which a prototypically violent action is guaranteed (by stipulation) to promote the greater good. The lesson that philosophers have, for the most part, drawn from this dilemma is that it’s sometimes deeply wrong to promote the greater good. However, our understanding of the dual-process moral brain suggests a different lesson: Our moral intuitions are generally sensible, but not infallible. As a result, it’s all but guaranteed that we can dream up examples that exploit the cognitive inflexibility of our moral intuitions. It’s all but guaranteed that we can dream up a hypothetical action that is actually good but that seems terribly, horribly wrong because it pushes our moral buttons. I know of no better candidate for this honor than the footbridge dilemma. (251)

The obverse is that many of the things that seem morally acceptable to us actually do cause harm to people. Greene cites the example of a man who lets a child drown because he doesn’t want to ruin his expensive shoes, which most people agree is monstrous, even though we think nothing of spending money on things we don’t really need when we could be sending that money to save sick or starving children in some distant country. Then there are crimes against the environment, which always seem to rank low on our list of priorities even though their future impact on real human lives could be devastating. We have our justifications for such actions or omissions, to be sure, but how valid are they really? Is distance really a morally relevant factor when we let children die? Does the diffusion of responsibility among so many millions change the fact that we personally could have a measurable impact?

These black marks notwithstanding, cooperation, and even a certain degree of altruism, come naturally to us. To demonstrate this, Greene and his colleagues have devised some clever methods for separating test subjects’ intuitive responses from their more deliberate and effortful decisions. The experiments begin with a multi-participant exchange scenario developed by economic game theorists called the Public Goods Game, which has a number of anonymous players contribute to a common bank whose sum is then doubled and distributed evenly among them. Like the more famous game theory exchange known as the Prisoner’s Dilemma, the outcomes of the Public Goods Game reward cooperation, but only when a threshold number of fellow cooperators is reached. The flip side, however, is that any individual who decides to be stingy can get a free ride from everyone else’s contributions and make an even greater profit. What tends to happen is that, over multiple rounds, the number of players opting for stinginess increases until the game is ruined for everyone, a process analogous to a phenomenon in economics known as the Tragedy of the Commons. Everyone wants to graze a few more sheep on the commons than can be sustained fairly, so eventually the grounds are left barren.
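Greene describes the game verbally, but the payoff structure is concrete enough to sketch. The toy simulation below is my own illustration, not anything from Moral Tribes: the player count, endowment, and the simple "defect once you see free riders out-earning you" rule are all made-up assumptions. It just shows how the doubling-and-splitting arrangement lets a lone free rider out-earn the cooperators, and how cooperation then tends to unravel round by round until everyone is worse off.

```python
import random

ENDOWMENT = 10.0   # points each player starts with per round (illustrative number)
MULTIPLIER = 2.0   # the common pot is doubled, as in the version described above

def payoffs(contributions):
    """Each player keeps whatever she didn't contribute, plus an equal
    share of the doubled pot."""
    pot = sum(contributions) * MULTIPLIER
    share = pot / len(contributions)
    return [ENDOWMENT - c + share for c in contributions]

def simulate(n_players=4, n_rounds=8, seed=0):
    random.seed(seed)
    # Start with one free rider among otherwise full cooperators.
    cooperating = [True] * (n_players - 1) + [False]
    for rnd in range(1, n_rounds + 1):
        contributions = [ENDOWMENT if c else 0.0 for c in cooperating]
        results = payoffs(contributions)
        print(f"round {rnd}: cooperators={sum(cooperating)}, "
              f"payoffs={[round(p, 1) for p in results]}")
        # Toy learning rule: a cooperator who sees someone out-earning her
        # switches to free riding with 50% probability next round.
        best = max(results)
        cooperating = [c and not (results[i] < best and random.random() < 0.5)
                       for i, c in enumerate(cooperating)]

simulate()
```

Run it and the pattern Greene describes shows up quickly: the free rider does best in the early rounds, imitators multiply, and before long everyone is stuck keeping only their own endowment instead of the larger payoff full cooperation would have produced.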

The Biological and Cultural Evolution of Morality

            Greene believes that humans evolved emotional mechanisms to prevent the various analogs of the Tragedy of the Commons from occurring so that we can live together harmoniously in tight-knit groups. The outcomes of multiple rounds of the Public Goods Game, for instance, tend to be far less dismal when players are given the opportunity to devote a portion of their own profits to punishing free riders. Most humans, it turns out, will be motivated by the emotion of anger to expend their own resources for the sake of enforcing fairness. Over several rounds, cooperation becomes the norm. Such an outcome has been replicated again and again, but researchers are always interested in factors that influence players’ strategies in the early rounds. Greene describes a series of experiments he conducted with David Rand and Martin Nowak, which were reported in an article in Nature in 2012. He writes,

…we conducted our own Public Goods Games, in which we forced some people to decide quickly (less than ten seconds) and forced others to decide slowly (more than ten seconds). As predicted, forcing people to decide faster made them more cooperative and forcing people to slow down made them less cooperative (more likely to free ride). In other experiments, we asked people, before playing the Public Goods Game, to write about a time in which their intuitions served them well, or about a time in which careful reasoning led them astray. Reflecting on the advantages of intuitive thinking (or the disadvantages of careful reflection) made people more cooperative. Likewise, reflecting on the advantages of careful reasoning (or the disadvantages of intuitive thinking) made people less cooperative. (62)

These results offer strong support for Greene’s dual-process theory of morality, and they even hint at the possibility that the manual mode is fundamentally selfish or amoral—in other words, that the deontologist philosophers have been right all along in deferring to human intuitions about right and wrong.
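The punishment mechanism described above is just as easy to make concrete. Here is a small extension of the earlier sketch—again my own illustration, with arbitrary fee and fine sizes rather than anything taken from Greene's experiments. The point it demonstrates is simply that once cooperators are willing to pay a little to fine free riders, free riding stops being the most profitable move.

```python
def payoffs_with_punishment(contributions, endowment=10.0, multiplier=2.0,
                            fee=2.0, fine=6.0):
    """Public Goods Game plus a punishment stage: every cooperator pays a
    small fee to fine every free rider (fee and fine sizes are arbitrary)."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    results = [endowment - c + share for c in contributions]
    cooperators = [i for i, c in enumerate(contributions) if c > 0]
    free_riders = [i for i, c in enumerate(contributions) if c == 0]
    for i in cooperators:
        results[i] -= fee * len(free_riders)    # anger is costly to act on...
    for j in free_riders:
        results[j] -= fine * len(cooperators)   # ...but costlier to be the target of
    return results

# One free rider among four players:
#   each cooperator: 10 - 10 + 15 - 2*1 = 13
#   the free rider:  10 -  0 + 15 - 6*3 =  7   -> free riding no longer pays
print(payoffs_with_punishment([10.0, 10.0, 10.0, 0.0]))
```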

            As good as our intuitive moral sense is for preventing the Tragedy of the Commons, however, when given free rein in a society made up of large groups of strangers, each group with its own culture and priorities, our natural moral settings bring about an altogether different tragedy. Greene labels it the Tragedy of Commonsense Morality. He explains,

Morality evolved to enable cooperation, but this conclusion comes with an important caveat. Biologically speaking, humans were designed for cooperation, but only with some people. Our moral brains evolved for cooperation within groups, and perhaps only within the context of personal relationships. Our moral brains did not evolve for cooperation between groups (at least not all groups). (23)

Expanding on the story behind the Tragedy of the Commons, Greene describes what would happen if several groups, each having developed its own unique solution for making sure the commons were protected from overgrazing, were suddenly to come into contact with one another on a transformed landscape called the New Pastures. Each group would likely harbor suspicions against the others, and when it came time to negotiate a new set of rules to govern everybody the groups would all show a significant, though largely unconscious, bias in favor of their own members and their own ways.

            The origins of moral psychology as a field can be traced to both developmental and evolutionary psychology. Seminal research conducted at Yale’s Infant Cognition Center by Karen Wynn, Kiley Hamlin, and Paul Bloom (research Bloom describes in a charming and highly accessible book called Just Babies) has demonstrated that children as young as six months possess what we can easily recognize as a rudimentary moral sense. These findings suggest that much of the behavior we might have previously ascribed to lessons learned from adults is actually innate. Experiments based on game theory scenarios and thought-experiments like the trolley dilemmas are likewise thought to tap into evolved patterns of human behavior. Yet when University of British Columbia psychologist Joseph Henrich teamed up with several anthropologists to see how people living in various small-scale societies responded to game theory scenarios like the Prisoner’s Dilemma and the Public Goods Game, they discovered a great deal of variation. On the one hand, then, human moral intuitions seem to be rooted in emotional responses present at, or at least close to, birth, but on the other hand cultures vary widely in their conventional responses to classic dilemmas. These differences between cultural conceptions of right and wrong are in large part responsible for the conflict Greene envisions in his Parable of the New Pastures.

But how can a moral sense be both innate and culturally variable? “As you might expect,” Greene explains, “the way people play these games reflects the way they live.” People in some cultures rely much more heavily on cooperation to procure their sustenance, as is the case with the Lamalera of Indonesia, who live off the meat of whales they hunt in groups. Cultures also vary in how much they rely on market economies as opposed to less abstract and less formal modes of exchange. Just as people adjust the way they play economic games in response to other players’ moves, people acquire habits of cooperation based on the circumstances they face in their particular societies. Regarding the differences between small-scale societies in common game theory strategies, Greene writes,

Henrich and colleagues found that payoffs to cooperation and market integration explain more than two thirds of the variation across these cultures. A more recent study shows that, across societies, market integration is an excellent predictor of altruism in the Dictator Game. At the same time, many factors that you might expect to be important predictors of cooperative behavior—things like an individual’s sex, age, and relative wealth, or the amount of money at stake—have little predictive power. (72)

In much the same way humans are programmed to learn a language and acquire a set of religious beliefs, they also come into the world with a suite of emotional mechanisms that make up the raw material for what will become a culturally calibrated set of moral intuitions. The specific language and religion we end up with is of course dependent on the social context of our upbringing, just as our specific moral settings will reflect those of other people in the societies we grow up in.

Jonathan Haidt and Tribal Righteousness

In our modern industrial society, we actually have some degree of choice when it comes to our cultural affiliations, and this freedom opens the way for heritable differences between individuals to play a larger role in our moral development. Such differences are nowhere as apparent as in the realm of politics, where nearly all citizens occupy some point on a continuum between conservative and liberal. According to Greene’s fellow moral psychologist Jonathan Haidt, we have precious little control over our moral responses because, in his view, reason only comes into play to justify actions and judgments we’ve already made. In his fascinating 2012 book The Righteous Mind, Haidt insists,

Moral reasoning is part of our lifelong struggle to win friends and influence people. That’s why I say that “intuitions come first, strategic reasoning second.” You’ll misunderstand moral reasoning if you think about it as something people do by themselves in order to figure out the truth. (50)

To explain the moral divide between right and left, Haidt points to the findings of his own research on what he calls Moral Foundations, six dimensions underlying our intuitions about moral and immoral actions. Conservatives tend to endorse judgments based on all six of the foundations, valuing loyalty, authority, and sanctity much more than liberals, who focus more exclusively on care for the disadvantaged, fairness, and freedom from oppression. Since our politics emerge from our moral intuitions and reason merely serves as a sort of PR agent to rationalize judgments after the fact, Haidt enjoins us to be more accepting of rival political groups—after all, you can’t reason with them.

            Greene objects both to Haidt’s Moral Foundations theory and to his prescription for a politics of complementarity. The responses to questions representing all the moral dimensions in Haidt’s studies form two clusters on a graph, Greene points out, not six, suggesting that the differences between conservatives and liberals are attributable to some single overarching variable as opposed to several individual tendencies. Furthermore, the specific content of the questions Haidt uses to flesh out the values of his respondents has a critical limitation. Greene writes,

According to Haidt, American social conservatives place greater value on respect for authority, and that’s true in a sense. Social conservatives feel less comfortable slapping their fathers, even as a joke, and so on. But social conservatives do not respect authority in a general way. Rather, they have great respect for authorities recognized by their tribe (from the Christian God to various religious and political leaders to parents). American social conservatives are not especially respectful of Barack Hussein Obama, whose status as a native-born American, and thus a legitimate president, they have persistently challenged. (339)

The same limitation applies to the loyalty and sanctity foundations. Conservatives feel little loyalty toward the atheists and Muslims among their fellow Americans. Nor do they recognize the sanctity of mosques or Hindu holy texts. Greene goes on,

American social conservatives are not best described as people who place special value on authority, sanctity, and loyalty, but rather as tribal loyalists—loyal to their own authorities, their own religion, and themselves. This doesn’t make them evil, but it does make them parochial, tribal. In this they’re akin to the world’s other socially conservative tribes, from the Taliban in Afghanistan to European nationalists. According to Haidt, liberals should be more open to compromise with social conservatives. I disagree. In the short term, compromise may be necessary, but in the long term, our strategy should not be to compromise with tribal moralists, but rather to persuade them to be less tribalistic. (340)

Greene believes such persuasion is possible, even with regard to emotionally and morally charged controversies, because he sees our manual-mode thinking as playing a potentially much greater role than Haidt allows.

Metamorality on the New Pastures

            Throughout The Righteous Mind, Haidt argues that the moral philosophers who laid the foundations of modern liberal and academic conceptions of right and wrong gave short shrift to emotions and intuitions—that they gave far too much credit to our capacity for reason. To be fair, Haidt does honor the distinction between descriptive and prescriptive theories of morality, but he nonetheless gives the impression that he considers liberal morality to be somewhat impoverished. Greene sees this attitude as thoroughly wrongheaded. Responding to Haidt’s metaphor comparing his Moral Foundations to taste buds—with the implication that the liberal palate is more limited in the range of flavors it can appreciate—Greene writes,

The great philosophers of the Enlightenment wrote at a time when the world was rapidly shrinking, forcing them to wonder whether their own laws, their own traditions, and their own God(s) were any better than anyone else’s. They wrote at a time when technology (e.g., ships) and consequent economic productivity (e.g., global trade) put wealth and power into the hands of a rising educated class, with incentives to question the traditional authorities of king and church. Finally, at this time, natural science was making the world comprehensible in secular terms, revealing universal natural laws and overturning ancient religious doctrines. Philosophers wondered whether there might also be universal moral laws, ones that, like Newton’s law of gravitation, applied to members of all tribes, whether or not they knew it. Thus, the Enlightenment philosophers were not arbitrarily shedding moral taste buds. They were looking for deeper, universal moral truths, and for good reason. They were looking for moral truths beyond the teachings of any particular religion and beyond the will of any earthly king. They were looking for what I’ve called a metamorality: a pan-tribal, or post-tribal, philosophy to govern life on the new pastures. (338-9)

While Haidt insists we must recognize the centrality of intuitions even in this civilization nominally ruled by reason, Greene points out that it was skepticism of old, seemingly unassailable and intuitive truths that opened up the world and made modern industrial civilization possible in the first place.  

            As Haidt explains, though, conservative morality serves people well in certain regards. Christian churches, for instance, encourage charity and foster a sense of community few secular institutions can match. But these advantages at the level of parochial groups have to be weighed against the problems tribalism inevitably leads to at higher levels. This is, in fact, precisely the point Greene created his Parable of the New Pastures to make. He writes,

The Tragedy of the Commons is averted by a suite of automatic settings—moral emotions that motivate and stabilize cooperation within limited groups. But the Tragedy of Commonsense Morality arises because of automatic settings, because different tribes have different automatic settings, causing them to see the world through different moral lenses. The Tragedy of the Commons is a tragedy of selfishness, but the Tragedy of Commonsense Morality is a tragedy of moral inflexibility. There is strife on the new pastures not because herders are hopelessly selfish, immoral, or amoral, but because they cannot step outside their respective moral perspectives. How should they think? The answer is now obvious: They should shift into manual mode. (172)

Greene argues that whenever we, as a society, are faced with a moral controversy—as with issues like abortion, capital punishment, and tax policy—our intuitions will not suffice because our intuitions are the very basis of our disagreement.

            Watching the conclusion to season four of Breaking Bad, most viewers probably responded to finding out that Walt had poisoned Brock by thinking that he’d become a monster—at least at first. Indeed, the currently dominant academic approach to art criticism involves taking a stance, both moral and political, with regard to a work’s models and messages. Writing for The New Yorker, Emily Nussbaum, for instance, disparages viewers of Breaking Bad for failing to condemn Walt, writing,

When Brock was near death in the I.C.U., I spent hours arguing with friends about who was responsible. To my surprise, some of the most hard-nosed cynics thought it inconceivable that it could be Walt—that might make the show impossible to take, they said. But, of course, it did nothing of the sort. Once the truth came out, and Brock recovered, I read posts insisting that Walt was so discerning, so careful with the dosage, that Brock could never have died. The audience has been trained by cable television to react this way: to hate the nagging wives, the dumb civilians, who might sour the fun of masculine adventure. “Breaking Bad” increases that cognitive dissonance, turning some viewers into not merely fans but enablers. (83)

To arrive at such an assessment, Nussbaum must reduce the show to the impact she assumes it will have on less sophisticated fans’ attitudes and judgments. But the really troubling aspect of this type of criticism is that it encourages scholars and critics to indulge their impulse toward self-righteousness when faced with challenging moral dilemmas; in other words, it encourages them to give voice to their automatic modes precisely when they should be shifting to manual mode. Thus, Nussbaum neglects outright the very details that make Walt’s scenario compelling, completely forgetting that by making Brock sick—and, yes, risking his life—he was able to save Hank, Skyler, and his own two children.

But how should we go about arriving at a resolution to moral dilemmas and political controversies if we agree we can’t always trust our intuitions? Greene believes that, while our automatic modes recognize certain acts as wrong and certain others as a matter of duty to perform, in keeping with deontological ethics, whenever we switch to manual mode the focus shifts to weighing the relative desirability of each option's outcomes. In other words, manual mode thinking is consequentialist. And, since we tend to assess outcomes according to their impact on other people, favoring those that improve the quality of their experiences the most, or detract from it the least, Greene argues that whenever we slow down and think through moral dilemmas deliberately we become utilitarians. He writes,

If I’m right, this convergence between what seems like the right moral philosophy (from a certain perspective) and what seems like the right moral psychology (from a certain perspective) is no accident. If I’m right, Bentham and Mill did something fundamentally different from all of their predecessors, both philosophically and psychologically. They transcended the limitations of commonsense morality by turning the problem of morality (almost) entirely over to manual mode. They put aside their inflexible automatic settings and instead asked two very abstract questions. First: What really matters? Second: What is the essence of morality? They concluded that experience is what ultimately matters, and that impartiality is the essence of morality. Combining these two ideas, we get utilitarianism: We should maximize the quality of our experience, giving equal weight to the experience of each person. (173)

If you cite an authority recognized only by your own tribe—say, the Bible—in support of a moral argument, then members of other tribes will either simply discount your position or counter it with pronouncements by their own authorities. If, on the other hand, you argue for a law or a policy by citing evidence that implementing it would mean longer, healthier, happier lives for the citizens it affects, then only those seeking to establish the dominion of their own tribe can discount your position (which of course isn’t to say they can’t offer rival interpretations of your evidence).

            If we turn commonsense morality on its head and evaluate the consequences of giving our intuitions priority over utilitarian accounting, we can find countless circumstances in which being overly moral is to everyone’s detriment. Ideas of justice and fairness allow far too much space for selfish and tribal biases, whereas the math behind mutually optimal outcomes based on compromise tends to be harder to fudge. Greene reports, for instance, the findings of a series of experiments conducted by Fieke Harinck and colleagues at the University of Amsterdam in 2000. Negotiators representing either the prosecution or the defense were told either to focus on serving justice or to get the best outcome for their clients. The negotiations in the first condition almost always ended in an impasse. Greene explains,

Thus, two selfish and rational negotiators who see that their positions are symmetrical will be willing to enlarge the pie, and then split the pie evenly. However, if negotiators are seeking justice, rather than merely looking out for their bottom lines, then other, more ambiguous, considerations come into play, and with them the opportunity for biased fairness. Maybe your clients really deserve lighter penalties. Or maybe the defendants you’re prosecuting really deserve stiffer penalties. There is a range of plausible views about what’s truly fair in these cases, and you can choose among them to suit your interests. By contrast, if it’s just a matter of getting the best deal you can from someone who’s just trying to get the best deal for himself, there’s a lot less wiggle room, and a lot less opportunity for biased fairness to create an impasse. (88)

Framing an argument as an effort to establish who was right and who was wrong is like drawing a line in the sand—it activates tribal attitudes pitting us against them, while treating negotiations more like an economic exchange circumvents these tribal biases.

Challenges to Utilitarianism

But do we really want to suppress our automatic moral reactions in favor of deliberative accountings of the greatest good for the greatest number? Deontologists have posed some effective challenges to utilitarianism in the form of thought-experiments that seem to show efforts to improve the quality of experiences would lead to atrocities. For instance, Greene recounts how, in a high school debate, he was confronted with the hypothetical case of a surgeon who could save five sick patients by killing one healthy person. Then there’s the so-called Utility Monster, who experiences such happiness when eating humans that it quantitatively outweighs the suffering of those being eaten. More down-to-earth examples feature a scapegoat convicted of a crime to prevent rioting by people who are angry about police ineptitude, and the use of torture to extract information from a prisoner that could prevent a terrorist attack. The most influential challenge to utilitarianism, however, was leveled by the political philosopher John Rawls when he pointed out that it could be used to justify the enslavement of a minority by a majority.

Greene’s responses to these criticisms make up one of the most surprising, important, and fascinating parts of Moral Tribes. First, highly contrived thought-experiments about Utility Monsters and circumstances in which pushing a guy off a bridge is guaranteed to stop a trolley may indeed prove that utilitarianism is not true in any absolute sense. But whether or not such moral absolutes even exist is a contentious issue in its own right. Greene explains,

I am not claiming that utilitarianism is the absolute moral truth. Instead I’m claiming that it’s a good metamorality, a good standard for resolving moral disagreements in the real world. As long as utilitarianism doesn’t endorse things like slavery in the real world, that’s good enough. (275-6)

One source of confusion regarding the slavery issue is the equation of happiness with money; slave owners probably could make profits in excess of the losses sustained by the slaves. But money is often a poor index of happiness. Greene underscores this point by asking us to consider how much someone would have to pay us to sell ourselves into slavery. “In the real world,” he writes, “oppression offers only modest gains in happiness to the oppressors while heaping misery upon the oppressed” (284-5).

            Another failing of the thought-experiments thought to undermine utilitarianism is the shortsightedness of the supposedly obvious responses. The crimes of the murderous doctor and the scapegoating law officers may indeed produce short-term increases in happiness, but if the secret gets out, healthy and innocent people will live in fear, knowing they can’t trust doctors and law officers. The same logic applies to the objection that utilitarianism would force us to become infinitely charitable, since we can almost always afford to be more generous than we currently are. But how long could we serve as so-called happiness pumps before burning out, becoming miserable, and thus losing the capacity to make anyone else happier? Greene writes,

If what utilitarianism asks of you seems absurd, then it’s not what utilitarianism actually asks of you. Utilitarianism is, once again, an inherently practical philosophy, and there’s nothing more impractical than commanding free people to do things that strike them as absurd and that run counter to their most basic motivations. Thus, in the real world, utilitarianism is demanding, but not overly demanding. It can accommodate our basic human needs and motivations, but it nonetheless calls for substantial reform of our selfish habits. (258)

Greene seems to be endorsing what philosophers call "rule utilitarianism." We can approach every choice by calculating the likely outcomes, but as a society we would be better served deciding on some rules for everyone to adhere to. It just may be possible for a doctor to increase happiness through murder in a particular set of circumstances—but most people would vociferously object to a rule legitimizing the practice.

            The concept of human rights may present another challenge to Greene in his championing of consequentialism over deontology. It is our duty, after all, to recognize the rights of every human, and we ourselves have no right to disregard someone else’s rights no matter what benefit we believe might result from doing so. In his book The Better Angels of our Nature, Steven Pinker, Greene’s colleague at Harvard, attributes much of the steep decline in rates of violent death over the past three centuries to a series of what he calls Rights Revolutions, the first of which began during the Enlightenment. But the problem with arguments that refer to rights, Greene explains, is that controversies arise for the very reason that people don’t agree which rights we should recognize. He writes,

Thus, appeals to “rights” function as an intellectual free pass, a trump card that renders evidence irrelevant. Whatever you and your fellow tribespeople feel, you can always posit the existence of a right that corresponds to your feelings. If you feel that abortion is wrong, you can talk about a “right to life.” If you feel that outlawing abortion is wrong, you can talk about a “right to choose.” If you’re Iran, you can talk about your “nuclear rights,” and if you’re Israel you can talk about your “right to self-defense.” “Rights” are nothing short of brilliant. They allow us to rationalize our gut feelings without doing any additional work. (302)

The only way to resolve controversies over which rights we should actually recognize and which rights we should prioritize over others, Greene argues, is to apply utilitarian reasoning.

Ideology and Epistemology

            In his discussion of the proper use of the language of rights, Greene comes closer than in any other section of Moral Tribes to explicitly articulating what strikes me as the most revolutionary idea that he and his fellow moral psychologists are suggesting—albeit as yet only implicitly. In his advocacy for what he calls “deep pragmatism,” Greene isn’t merely applying evolutionary theories to an old philosophical debate; he’s actually posing a subtly different question. The numerous thought-experiments philosophers use to poke holes in utilitarianism may not have much relevance in the real world—but they do undermine any claim utilitarianism may have on absolute moral truth. Greene’s approach is therefore to eschew any effort to arrive at absolute truths, including truths pertaining to rights. Instead, in much the same way scientists accept that our knowledge of the natural world is mediated by theories, which only approach the truth asymptotically, never capturing it with any finality, Greene intimates that the important task in the realm of moral philosophy isn’t to arrive at a full accounting of moral truths but rather to establish a process for resolving moral and political dilemmas.

What’s needed, in other words, isn’t a rock-solid moral ideology but a workable moral epistemology. And, just as empiricism serves as the foundation of the epistemology of science, Greene makes a convincing case that we could use utilitarianism as the basis of an epistemology of morality. Pursuing the analogy between scientific and moral epistemologies even further, we can compare theories, which stand or fall according to their empirical support, to individual human rights, which we afford and affirm according to their impact on the collective happiness of every individual in the society. Greene writes,

If we are truly interested in persuading our opponents with reason, then we should eschew the language of rights. This is, once again, because we have no non-question-begging (and non-utilitarian) way of figuring out which rights really exist and which rights take precedence over others. But when it’s not worth arguing—either because the question has been settled or because our opponents can’t be reasoned with—then it’s time to start rallying the troops. It’s time to affirm our moral commitments, not with wonky estimates of probabilities but with words that stir our souls. (308-9)

Rights may be the closest thing we have to moral truths, just as theories serve as our stand-ins for truths about the natural world, but even more important than rights or theories are the processes we rely on to establish and revise them.

A New Theory of Narrative

            As if a philosophical revolution weren’t enough, moral psychology is also putting in place what could be the foundation of a new understanding of the role of narratives in human lives. At the heart of every story is a conflict between competing moral ideals. In commercial fiction, there tends to be a character representing each side of the conflict, and audiences can be counted on to favor one side over the other—the good, altruistic guys over the bad, selfish guys. In more literary fiction, on the other hand, individual characters are faced with dilemmas pitting various modes of moral thinking against each other. In season one of Breaking Bad, for instance, Walter White famously jots down on a notepad the pros and cons of murdering the drug dealer restrained in Jesse’s basement. Everyone, including Walt, feels that killing the man is wrong, but if they let him go Walt and his family will be at risk of retaliation. This dilemma is in fact quite similar to ones he faces in each of the following seasons, right up until he has to decide whether or not to poison Brock. Trying to work out what Walt should do, and anxiously anticipating what he will do, are mental exercises few can help engaging in as they watch the show.

            The current reigning conception of narrative in academia explains the appeal of stories by suggesting it derives from their tendency to fulfill conscious and unconscious desires, most troublesomely our desires to have our prejudices reinforced. We like to see men behaving in ways stereotypically male, women stereotypically female, minorities stereotypically black or Hispanic, and so on. Cultural products like works of art, and even scientific findings, are embraced, the reasoning goes, because they cement the hegemony of various dominant categories of people within the society. This tradition in arts scholarship and criticism can in large part be traced back to psychoanalysis, but it has developed over the last century to incorporate both the predominant view of language in the humanities and the cultural determinism espoused by many scholars in the service of various forms of identity politics. 

            The postmodern ideology that emerged from the convergence of these schools is marked by a suspicion that science is often little more than a veiled effort at buttressing the political status quo, and its preeminent thinkers deliberately set themselves apart from the humanist and Enlightenment traditions that held sway in academia until the middle of the last century by writing in byzantine, incoherent prose. Even though there could be no rational way either to support or challenge postmodern ideas, scholars still take them as cause for accusing both scientists and storytellers of using their work to further reactionary agendas.

            For anyone who recognizes the unparalleled power of science both to advance our understanding of the natural world and to improve the conditions of human lives, postmodernism stands out as a catastrophic wrong turn, not just in academic history but in the moral evolution of our civilization. The equation of narrative with fantasy is a bizarre fantasy in its own right. Attempting to explain the appeal of a show like Breaking Bad by suggesting that viewers have an unconscious urge to be diagnosed with cancer and to subsequently become drug manufacturers is symptomatic of intractable institutional delusion. And, as Pinker recounts in Better Angels, literature, and the novel in particular, was likely instrumental in bringing about the shift in consciousness toward greater compassion for greater numbers of people that resulted in the unprecedented decline in violence beginning in the second half of the eighteenth century.

Yet, when it comes to arts scholarship, postmodernism is just about the only game in town. Granted, the writing in this tradition has progressed a great deal toward greater clarity, but the focus on identity politics has intensified to the point of hysteria: you’d be hard-pressed to find a major literary figure who hasn’t been accused of misogyny at one point or another, and any scientist who dares study something like gender differences can count on having her motives questioned and her work lampooned by well-intentioned, well-indoctrinated squads of well-poisoning liberal wags.

            When Emily Nussbaum complains about viewers of Breaking Bad being lulled by the “masculine adventure” and the digs against “nagging wives” into becoming enablers of Walt’s bad behavior, she’s doing exactly what so many of us were taught to do in academic courses on literary and film criticism, applying a postmodern feminist ideology to the show—and completely missing the point. As the series opens, Walt is deliberately portrayed as downtrodden and frustrated, and Skyler’s bullying is an important part of that dynamic. But the pleasure of watching the show doesn’t come so much from seeing Walt get out from under Skyler’s thumb—he never really does—as it does from anticipating and fretting over how far Walt will go in his criminality, goaded on by all that pent-up frustration. Walt shows a similar concern for himself, worrying over what effect his exploits will have on who he is and how he will be remembered. We see this in season three when he becomes obsessed by the “contamination” of his lab—which turns out to be no more than a house fly—and at several other points as well. Viewers are not concerned with Walt because he serves as an avatar acting out their fantasies (or else the show would have a lot more nude scenes with the principal of the school he teaches in). They’re concerned because, at least at the beginning of the show, he seems to be a good person and they can sympathize with his tribulations.

The much more solidly grounded view of narrative inspired by moral psychology suggests that common themes in fiction are not reflections or reinforcements of some dominant culture, but rather emerge from aspects of our universal human psychology. Our feelings about characters, according to this view, aren’t determined by how well they coincide with our abstract prejudices; on the contrary, we favor the types of people in fiction we would favor in real life. Indeed, if the story is any good, we will have to remind ourselves that the people whose lives we’re tracking really are fictional. Greene doesn’t explore the intersection between moral psychology and narrative in Moral Tribes, but he does give a nod to what we can only hope will be a burgeoning field when he writes, 

Nowhere is our concern for how others treat others more apparent than in our intense engagement with fiction. Were we purely selfish, we wouldn’t pay good money to hear a made-up story about a ragtag group of orphans who use their street smarts and quirky talents to outfox a criminal gang. We find stories about imaginary heroes and villains engrossing because they engage our social emotions, the ones that guide our reactions to real-life cooperators and rogues. We are not disinterested parties. (59)

Many people ask why we care so much about people who aren’t even real, but we only ever reflect on the fact that what we’re reading or viewing is a simulation when we’re not sufficiently engrossed by it.

            Was Nussbaum completely wrong to insist that Walt went beyond the pale when he poisoned Brock? She certainly showed the type of myopia Greene attributes to the automatic mode by forgetting Walt saved at least four lives by endangering the one. But most viewers probably had a similar reaction. The trouble wasn’t that she was appalled; it was that her postmodernism encouraged her to unquestioningly embrace and give voice to her initial feelings. Greene writes, 

It’s good that we’re alarmed by acts of violence. But the automatic emotional gizmos in our brains are not infinitely wise. Our moral alarm systems think that the difference between pushing and hitting a switch is of great moral significance. More important, our moral alarm systems see no difference between self-serving murder and saving a million lives at the cost of one. It’s a mistake to grant these gizmos veto power in our search for a universal moral philosophy. (253)

Greene doesn’t reveal whether he’s a Breaking Bad fan or not, but his discussion of the footbridge dilemma gives readers a good indication of how he’d respond to Walt’s actions.

If you don’t feel that it’s wrong to push the man off the footbridge, there’s something wrong with you. I, too, feel that it’s wrong, and I doubt that I could actually bring myself to push, and I’m glad that I’m like this. What’s more, in the real world, not pushing would almost certainly be the right decision. But if someone with the best of intentions were to muster the will to push the man off the footbridge, knowing for sure that it would save five lives, and knowing for sure that there was no better alternative, I would approve of this action, although I might be suspicious of the person who chose to perform it. (251)

The overarching theme of Breaking Bad, which Nussbaum fails utterly to comprehend, is the transformation of a good man into a bad one. When he poisons Brock, we’re glad he succeeded in saving his family, and some of us are even okay with his methods, but we’re worried—suspicious even—about what his ability to go through with it says about him. Over the course of the series, we’ve found ourselves rooting for Walt, and we’ve come to really like him. We don’t want to see him break too bad. And however bad he does break, we can’t help hoping for his redemption. Since he’s a valuable member of our tribe, we’re loath even to consider that it might be time for him to go.