
“Some of these stories are closer to my own life than others are, but not one of them is as close as people seem to think.” Alice Munro, from the introduction to The Moons of Jupiter

"Talent hits a target no one else can hit; genius hits a target no one else can see." Arthur Schopenhauer

“Why does everything you know, and everything you’ve learned, confirm you in what you believed before? Whereas in my case, what I grew up with, and what I thought I believed, is chipped away a little and a little, a fragment then a piece and then a piece more. With every month that passes, the corners are knocked off the certainties of this world: and the next world too. Show me where it says, in the Bible, ‘Purgatory.’ Show me where it says ‘relics, monks, nuns.’ Show me where it says ‘Pope.’” –Thomas Cromwell imagines asking Thomas More—Wolf Hall by Hilary Mantel

My favorite posts to get started: The Self-Righteousness Instinct, Sabbath Says, Encounters, Inc., and What Makes "Wolf Hall" so Great?.

Friday, March 30, 2012

Jonathan Haidt's Magnificent Presentation of D.S. Wilson's Theories

For multi-level selection theory, check out Wilson's work with Elliott Sober, Unto Others.
For the theory of how religion evolved through multilevel selection see Wilson's Darwin's Cathedral.
For a review of Haidt's new book, The Righteous Mind, go here.

Tuesday, March 27, 2012

HUNGER GAME THEORY: Post-Apocalyptic Fiction and the Rebirth of Humanity

            The appeal of post-apocalyptic stories stems from the joy of experiencing anew the birth of humanity. The renaissance never occurs in M.T. Anderson’s Feed, in which the main character is rendered hopelessly complacent by the entertainment and advertising beamed directly into his brain. And it is that very complacency, the product of our modern civilization's unfathomable complexity, that most threatens our sense of our own humanity. There was likely a time, though, when small groups of our species were beset by outside groups of a different nature, a nature that, juxtaposed with ours, left no doubt as to who the humans were.
M.T. Anderson

      In Suzanne Collins’ The Hunger Games, Katniss Everdeen reflects on how the life-or-death stakes of the contest she and her fellow “tributes” are made to participate in can transform teenage boys and girls into crazed killers. She’s been brought to a high-tech mega-city from District 12, a mining town as quaint as the so-called Capitol is futuristic. Peeta Mellark, who was chosen by lottery as the other half of the boy-girl pair of tributes from the district, has just said to her, “I want to die as myself…I don’t want them to change me in there. Turn me into some kind of monster that I’m not.” Peeta also wants “to show the Capitol they don’t own me. That I’m more than just a piece in their Games.” The idea startles Katniss, who at this point is thinking of nothing but surviving the games—knowing full well that there are twenty-two more tributes and only one will be allowed to leave the arena alive. Annoyed by Peeta’s pronouncement of a higher purpose, she thinks,

Suzanne Collins
We will see how high and mighty he is when he’s faced with life and death. He’ll probably turn into one of those raging beast tributes, the kind who tries to eat someone’s heart after they’ve killed them. There was a guy like that a few years ago from District 6 called Titus. He went completely savage and the Gamemakers had to have him stunned with electric guns to collect the bodies of the players he’d killed before he ate them. There are no rules in the arena, but cannibalism doesn’t play well with the Capitol audience, so they tried to head it off. (141-3)

Cannibalism is the ultimate relinquishing of the mantle of humanity because it entails denying the humanity of those being hunted for food. It’s the most basic form of selfishness: I kill you so I can live.

Cormac McCarthy
            The threat posed to humanity by hunger is also the main theme of Cormac McCarthy’s The Road, the story of a father and son wandering around the ruins of a collapsed civilization. The two routinely search abandoned houses for food and supplies, and in one they discover a bunch of people locked in a cellar. The gruesome clue to the mystery of why they’re being kept is that some have limbs amputated. The men keeping them are devouring the living bodies a piece at a time. After a harrowing escape, the boy, understandably disturbed, asks, “They’re going to kill those people, arent they?” His father, trying to protect him from the harsh reality, answers yes, but tries to be evasive, leading to this exchange:

            Why do they have to do that?
            I dont know.
            Are they going to eat them?
            I dont know.
            They’re going to eat them, arent they?
            And we couldnt help them because then they’d eat us too.
            And that’s why we couldnt help them.

But of course it’s not okay. After they’ve put some more distance between them and the human abattoir, the boy starts to cry. His father presses him to explain what’s wrong:

            Just tell me.
            We wouldnt ever eat anybody, would we?
            No. Of course not.
            Even if we were starving?
            We’re starving now.
            You said we werent.
            I said we werent dying. I didnt say we werent starving.
            But we wouldnt.
            No. We wouldnt.
            No matter what.
            No. No matter what.
            Because we’re the good guys.
            And we’re carrying the fire.
            And we’re carrying the fire. Yes.
            Okay. (127-9)

And this time it actually is okay because the boy, like Peeta Mellark, has made it clear that if the choice is between dying and becoming a monster he wants to die. This preference for death over depredation of others is one of the hallmarks of humanity, and it poses a major difficulty for economists and evolutionary biologists alike. How could this type of selflessness possibly evolve?
John von Neumann

            John von Neumann, one of the founders of game theory, played an important role in developing the policies that have so far prevented a real-life apocalypse from taking place. He is credited with the strategy of Mutually Assured Destruction, or MAD (he liked amusing acronyms), that prevailed during the Cold War. As the name implies, the goal was to assure the Soviets that if they attacked us everyone would die. Since the U.S. knew the same was true of any of our own plans to attack the Soviets, a tense peace, or Cold War, was the inevitable result. But von Neumann was not at all content with this peace. He devoted his twilight years to pushing for the development of Intercontinental Ballistic Missiles (ICBMs) that would allow the U.S. to bomb Russia without giving the Soviets a chance to respond. In 1950, he made the infamous remark that inspired Dr. Strangelove: “If you say why not bomb them tomorrow, I say, why not today? If you say today at five o’clock, I say why not one o’clock?”
           Von Neumann’s eagerness to hit the Russians first was based on the logic of game theory, and that same logic is at play in The Hunger Games and other post-apocalyptic fiction. The problem with cooperation, whether between rival nations or between individual competitors in a game of life-or-death, is that it requires trust—and once one player begins to trust the other, he or she becomes vulnerable to exploitation, the proverbial stab in the back from the person who’s supposed to be watching it. Game theorists model this dynamic with a thought experiment called the Prisoner’s Dilemma. Imagine two criminals are captured and taken to separate interrogation rooms. Each criminal has the option of either cooperating with the other criminal by remaining silent or betraying him or her by confessing. Here’s a graph of the possible outcomes:
[Prisoner’s Dilemma payoff matrix, from Wikipedia]
No matter what the other player does, each of them achieves a better outcome by confessing. Von Neumann saw the standoff between the U.S. and the Soviets as a Prisoner’s Dilemma; by not launching nukes, each side was cooperating with the other. Eventually, though, one of them had to realize that the only rational thing to do was be the first to defect.
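The dilemma's logic can be sketched in a few lines of code. The payoffs below are the standard illustrative prison sentences used in textbook treatments (lower is better); none of the numbers come from the novel:

```python
# Standard one-shot Prisoner's Dilemma with illustrative payoffs
# (years in prison, so lower is better).
# "C" = cooperate (stay silent), "D" = defect (confess).
PAYOFFS = {
    ("C", "C"): (1, 1),   # both stay silent: light sentences
    ("C", "D"): (3, 0),   # I stay silent, you confess: I take the fall
    ("D", "C"): (0, 3),   # I confess, you stay silent: I walk free
    ("D", "D"): (2, 2),   # both confess: moderate sentences
}

def best_response(opponent_move):
    """Return my move that minimizes my own sentence, given the opponent's move."""
    return min("CD", key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Whatever the other prisoner does, confessing comes out ahead:
print(best_response("C"))  # "D"
print(best_response("D"))  # "D"
```

Mutual cooperation would leave both players better off than mutual defection (one year each instead of two), yet each player's individually rational choice is to confess. That is exactly the trap von Neumann saw in the Cold War standoff.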

But the way humans play games is a bit different. As it turned out, von Neumann was wrong about the game theory implications of the Cold War—neither side ever did pull the trigger; both prisoners kept their mouths shut. In Collins' novel, Katniss faces a Prisoner's Dilemma every time she encounters another tribute who may be willing to team up with her in the hunger games. The graph for her and Peeta looks like this:


            If both cooperate: both improve their chances of making it to the final round.
            If Katniss betrays a trusting Peeta: killing Peeta is easier, but his strength and resourcefulness are wasted.
            If Peeta betrays a trusting Katniss: killing Katniss is easier, but her knowledge and skills are wasted.
            If both betray: both avoid the risks associated with betrayal, but both miss out on the benefits of the other’s abilities.

            In the context of the hunger games, then, it makes sense to team up with rivals as long as they have useful skills, knowledge, or strength. Each tribute knows, furthermore, that as long as he or she is useful to a teammate, it would be irrational for that teammate to defect.

            The Prisoner’s Dilemma logic gets much more complicated when you start having players try to solve it over multiple rounds of play. Game theorists refer to each time a player has to make a choice as an iteration. And to model human cooperative behavior you have to not only have multiple iterations but also find a way to factor in each player’s awareness of how rivals have responded to the dilemma in the past. Humans have reputations. Katniss, for instance, doesn’t trust the Career tributes because they have a reputation for being ruthless. She even begins to suspect Peeta when she sees that he’s teamed up with the Careers. (His knowledge of Katniss is a resource to them, but he’s using that knowledge in an irrational way—to protect her instead of himself.) On the other hand, Katniss trusts Rue because she's young and dependent—and because she comes from an adjacent district not known for sending tributes who are cold-blooded.

            When you have multiple iterations and reputations, you also open the door for punishments and rewards. At the most basic level, people reward those who they witness cooperating by being more willing to cooperate with them. As we read or watch The Hunger Games, we can actually experience the emotional shift that occurs in ourselves as we witness Katniss’s cooperative behavior. People punish those who defect by being especially reluctant to trust them. At this point, the analysis is still within the realm of the purely selfish and rational. But you can’t stay in that realm for very long when you’re talking about the ways humans respond to one another.
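The dynamic of rewarding cooperators and punishing defectors over repeated encounters can be sketched with a small simulation. The payoff numbers and the tit-for-tat strategy below are standard illustrations from game theory, not anything specified in the novel:

```python
# Iterated Prisoner's Dilemma with illustrative benefit payoffs (higher is
# better): mutual cooperation beats mutual defection, but a lone defector
# does best of all in any single round.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first; afterwards mirror the opponent's last move,
    # rewarding cooperators and punishing defectors.
    return opponent_history[-1] if opponent_history else "C"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []   # each side's record of the OTHER player's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_a), strategy_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation sustained
print(play(always_defect, tit_for_tat))  # (14, 9): one cheap win, then punishment
```

Against a fellow reciprocator, tit-for-tat sustains cooperation indefinitely; against a habitual defector, it gets burned exactly once and then withholds cooperation every round after, which is roughly how reputation functions among the tributes.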

            Each time Katniss encounters another tribute in the games she faces a Prisoner’s Dilemma. Until the final round, the hunger games are not a zero-sum contest—which means that a gain for one doesn’t necessarily mean a loss for the other. Ultimately, of course, Katniss and Peeta are playing a zero-sum game; since only one tribute can win, one of any two surviving players at the end will have to kill the other (or let him die). Every time one tribute kills another, the math of the Prisoner’s Dilemma has to be adjusted. Peeta, for instance, wouldn’t want to betray Katniss early on, while there are still several tributes trying to kill them, but he would want to balance the benefits of her resources with whatever advantage he could gain from her unsuspecting trust—so as they approach the last few tributes, his temptation to betray her gets stronger. Of course, Katniss knows this too, and so the same logic applies for her.
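This shifting calculus can be caricatured with a toy model; every number and function here is an assumption chosen purely for illustration, not derived from the book:

```python
# Toy model of how the temptation to betray an ally might grow as the
# field of tributes shrinks. All values are illustrative assumptions.

def cooperate_value(n_remaining):
    # An ally is worth more the more outside threats there are:
    # n_remaining - 2 rivals stand against the two-person alliance.
    return 2.0 * (n_remaining - 2)

def betray_value(n_remaining):
    # A surprise betrayal is modeled as a fixed head start on the
    # unavoidable final confrontation, regardless of field size.
    return 10.0

for n in (12, 6, 3):
    choice = "cooperate" if cooperate_value(n) > betray_value(n) else "betray"
    print(n, choice)  # 12 cooperate / 6 betray / 3 betray
```

With a dozen tributes in the field, an ally is worth keeping; as the field thins, the balance in this model tips toward betrayal, the same pressure Katniss and Peeta both feel as the endgame approaches.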

            As everyone who’s read the novel or seen the movie knows, however, this isn’t how either Peeta or Katniss plays in the hunger games. And we already have an idea of why that is: Peeta has said he doesn’t want to let the games turn him into a monster. Figuring out the calculus of the most rational decisions is well and good, but humans are often moved by their emotions—fear, affection, guilt, indebtedness, love, rage—to behave in ways that are completely irrational—at least in the near term. Peeta is in love with Katniss, and though she doesn’t really quite trust him at first, she proves willing to sacrifice herself in order to help him survive. This goes well beyond cooperation to serve purely selfish interests.

            Many evolutionary theorists believe that at some point in our evolutionary history, humans began competing with each other to see who could be the most cooperative. This paradoxical idea emerges out of a type of interaction between and among individuals called costly signaling. Many social creatures must decide who among their conspecifics would make the best allies. And all sexually reproducing animals have to have some way to decide with whom to mate. Determining who would make the best ally or who would be the fittest mate is so important that only the most reliable signals are given any heed. What makes the signals reliable is their cost—only the fittest can afford to engage in costly signaling. Some animals have elaborate feathers that are conspicuous to predators; others have massive antlers. This is known as the handicap principle. In humans, the theory goes, altruism somehow emerged as a costly signal, so that the fittest demonstrate their fitness by engaging in behaviors that benefit others to their own detriment. The boy in The Road, for instance, isn’t just upset by the prospect of having to turn to cannibalism himself; he’s sad that he and his father weren’t able to help the other people they found locked in the cellar.

            We can’t help feeling strong positive emotions toward altruists. Katniss wins over readers and viewers the moment she volunteers to serve as tribute in place of her younger sister, whose name was picked in the lottery. What’s interesting, though, is that at several points in the story Katniss actually does engage in purely rational strategizing. She doesn’t attempt to help Peeta for a long time after she finds out he’s been wounded trying to protect her—why would she when they’re only going to have to fight each other in later rounds? But when it really comes down to it, when it really matters most, both Katniss and Peeta demonstrate that they’re willing to protect one another even at a cost to themselves.

            The birth of humanity occurred, somewhat figuratively, when people refused to play the game of me versus you and determined instead to play us versus them. Humans don’t like zero-sum games, and whenever possible they try to change the rules so there can be more than one winner. To do that, though, they have to make it clear that they would rather die than betray their teammates. In The Road, the father and his son continue to carry the fire, and in The Hunger Games Peeta gets his chance to show he’d rather die than be turned into a monster. By the end of the story, it’s really no surprise what Katniss chooses to do either. Saving her sister may not have been purely altruistic from a genetic standpoint. But Peeta isn’t related to her, nor is he her only—or even her most eligible—suitor. Still, her moments of cold strategizing notwithstanding, we've had her picked as an altruist all along.

            Of course, humanity may have begun with the sense that it’s us versus them, but as it’s matured the us has grown to encompass an ever wider assortment of people and the them has receded to include more and more circumscribed groups of evil-doers. Unfortunately, there are still all too many people who are overly eager to treat unfamiliar groups as rival tribes, and all too many people who believe that the best governing principle for society is competition—the war of all against all. Altruism is one of the main hallmarks of humanity, and yet some people are simply more altruistic than others. Let’s just hope that it doesn’t come down to us versus them…again. 

Tuesday, March 20, 2012

Life's White Machine: James Wood and What Doesn't Happen in Fiction

            No one is a better reader of literary language than James Wood. In his reviews, he conveys with grace and precision his uncanny feel for what authors set out to say, what they actually end up saying, and what any discrepancy might mean for their larger literary endeavor. He effortlessly and convincingly infers from the lurch of faulty lines the confusions and pretensions and lacunae in understanding of struggling writers. Some take steady aim at starkly circumscribed targets, his analysis suggests, while others, desperate to achieve some greater, more devastating impact, shoot wistfully into the clouds. He can even listen to the likes of Republican presidential candidate Rick Santorum and explain, with his seemingly eidetic knowledge of biblical history, what is really meant when the supposed Catholic uses the word steward.

            As a critic, Wood’s ability to see character in narration and to find the author, with all his conceits and difficulties, in the character is often downright unsettling. For him there exists no divide between language and psychology—literature is the struggle of conflicted minds to capture the essence of experiences, their own and others’.

When Robert Browning describes the sound of a bird singing its song twice over, in order to ‘recapture/ The first fine careless rapture,’ he is being a poet, trying to find the best poetic image; but when Chekhov, in his story ‘Peasants,’ says that a bird’s cry sounded as if a cow had been locked up in a shed all night, he is being a fiction writer: he is thinking like one of his peasants. (24)

This is from Wood’s How Fiction Works. In the midst of a long paean to the power of free indirect style, the technique that allows the language of the narrator to bend toward and blend with the thoughts and linguistic style of characters—moving in and out of their minds—he deigns to mention, in a footnote, an actual literary theory, or rather Literary Theory. Wood likes Nabokov’s scene in the novel Pnin that has the eponymous professor trying to grasp a nutcracker in a sink full of dishes. The narrator awkwardly calls it a “leggy thing” as it slips through his grasp. “Leggy” conveys the image. “But ‘thing’ is even better, precisely because it is vague: Pnin is lunging at the implement, and what word in English better conveys a messy lunge, a swipe at verbal meaning, than ‘thing’?” (25) The vagueness makes of the psychological drama a contagion. There could be no symbol more immediately felt.

            The Russian Formalists come into Wood’s discussion here. Their theory focused on metaphors that bring about an “estranging” or “defamiliarizing” effect. Wood would press them to acknowledge that this making strange of familiar objects and experiences works in the service of connecting the minds of the reader with the mind of the character—it’s anything but random:

But whereas the Russian Formalists see this metaphorical habit as emblematic of the way that fiction does not refer to reality, is a self-enclosed machine (such metaphors are the jewels of the author’s freakish, solipsistic art), I prefer the way that such metaphors, as in Pnin’s “leggy thing,” refer deeply to reality: because they emanate from the characters themselves, and are fruits of free indirect style. (26)

Language and words and metaphors, Wood points out, by their nature carry us toward something diametrically opposed to collapsing in on ourselves. Indeed, there is something perverse about the insistence of so many professional scholars devoted to the study of literature that the main thrust of language is toward some unacknowledged agenda of preserving an unjust status quo—with the implication that the only way to change the world is to torture our modes of expression, beginning with literature (even though only a tiny portion of most first-world populations bother to read any).

I'm not a Rick Moody fan, so here's a pic of Hank Moody,
who famously said, "Literary theory? None for me thanks."
            For Wood, fiction is communion. This view has implications about what constitutes the best literature—all the elements from description to dialogue should work to further the dramatic development of the connection between reader and character. Wood even believes that the emphasis on “round” characters is overstated, pointing out that many of the most memorable—Jean Brodie, Mr. Biswas—are one-dimensional and unchanging. Nowhere in the table of contents of How Fiction Works, or even in the index, does the word plot appear. He does, however, discuss plot in his response to postmodernists’ complaints about realism. Wood quotes author Rick Moody:

It’s quaint to say so, but the realistic novel still needs a kick in the ass. The genre, with its epiphanies, its rising action, its predictable movement, its conventional humanisms, can still entertain and move us on occasion, but for me it’s politically and philosophically dubious and often dull. Therefore, it needs a kick in the ass.

Moody is known for a type of fiction that intentionally sabotages the sacred communion Wood sees as essential to the experience of reading fiction. He begins his response by unpacking some of the claims in Moody’s fussy pronouncement:

Moody’s three sentences efficiently compact the reigning assumptions. Realism is a “genre” (rather than, say, a central impulse in fiction-making); it is taken to be mere dead convention, and to be related to a certain kind of traditional plot, with predictable beginnings and endings; it deals in “round” characters, but softly and piously (“conventional humanisms”); it assumes that the world can be described, with a naively stable link between word and referent (“philosophically dubious”); and all this will tend toward a conservative or even oppressive politics (“politically… dubious”).

Wood begins the section following this analysis with a one-sentence paragraph: “This is all more or less nonsense” (224-5) (thus winning my devoted readership).
Ben Lerner
            That “more or less” refers to Wood’s own frustrations with modern fiction. Conventions, he concedes, tend toward ossification, though a trope’s status as a trope, he maintains, doesn’t make it untrue. “I love you,” is the most clichéd sentence in English. That doesn’t nullify the experience of falling in love (236). Wood does believe, however, that realistic fiction is too eventful to live up to the label. Reviewing Ben Lerner’s exquisite short novel Leaving the Atocha Station, Wood lavishes praise on the postmodernist poet’s first work of fiction. He writes of the author and his main character Adam Gordon,

Lerner is attempting to capture something that most conventional novels, with their cumbersome caravans of plot and scene and "conflict," fail to do: the drift of thought, the unmomentous passage of undramatic life. Several times in the book, he describes this as "that other thing, the sound-absorbent screen, life’s white machine, shadows massing in the middle distance… the texture of et cetera itself." Reading Tolstoy, Adam reflects that even that great master of the texture of et cetera itself was too dramatic, too tidy, too momentous: "Not the little miracles and luminous branching injuries, but the other thing, whatever it was, was life, and was falsified by any way of talking or writing or thinking that emphasized sharply localized occurrences in time." (98)

Wood is suspicious of plot, and even of those epiphanies whereby characters are rendered dynamic or three-dimensional or “round,” because he seeks in fiction new ways of seeing the world he inhabits according to how it might be seen by lyrically gifted fellow inhabitants. Those “cumbersome caravans of plot and scene and ‘conflict’” tend to be implausible distractions, forcing the communion into narrow confessionals, breaking the spell.

            As a critic who has garnered wide acclaim from august corners conferring a modicum of actual authority, and one who's achieved something quite rare for public intellectuals, a popular following, Wood is (too) often criticized for his narrow aestheticism. Once he closes the door on goofy postmodern gimcrack, it remains closed to other potentially relevant, potentially illuminating cultural considerations—or so his detractors maintain. That popular following of his is, however, composed of a small subset of fiction readers. And the disconnect between consumers of popular fiction and the more literary New Yorker subscribers speaks not just to the cultural issue of declining literacy or growing apathy toward fictional writing but to the more fundamental question of why people seek out narratives, along with the question Wood proposes to address in the title of his book: how does fiction work?

            While Wood communes with synesthetic flaneurs, many readers are looking to have their curiosity piqued, their questing childhood adventurousness revived, their romantic and nightmare imaginings played out before them. “If you look at the best of literary fiction," Benjamin Percy said in an interview with Joe Fassler,

you see three-dimensional characters, you see exquisite sentences, you see glowing metaphors. But if you look at the worst of literary fiction, you see that nothing happens. Somebody takes a sip of tea, looks out the window at a bank of roiling clouds and has an epiphany.

The scene Percy describes is even more eventful than what Lerner describes as “life’s white machine”—it features one of those damn epiphanies. But Percy is frustrated with heavy-handed plots too.

In the worst of genre fiction, you see hollow characters, you see transparent prose, you see the same themes and archetypes occurring from book to book. If you look at the best of genre fiction, you see this incredible desire to discover what happens next.
Benjamin Percy
The interview is part of Fassler’s post on the Atlantic website, “How Zombies and Superheroes Conquered Highbrow Fiction.” Percy is explaining the appeal of a new class of novel.

So what I'm trying to do is get back in touch with that time of my life when I was reading genre, and turning the pages so quickly they made a breeze on my face. I'm trying to take the best of what I've learned from literary fiction and apply it to the best of genre fiction, to make a kind of hybridized animal.

Is it possible to balance the two impulses: the urge to represent and defamiliarize, to commune, on the one hand, and the urge to create and experience suspense on the other? Obviously, if the theme you’re taking on is the struggle with boredom or the meaningless wash of time—white machine reminds me of a washer—then an incident-rich plot can only be ironic.
Ian McEwan
            The solution to the conundrum is that no life is without incident. Fiction’s subject has always been births, deaths, comings-of-age, marriages, battles. I’d imagine Wood himself is often in the mood for something other than idle reflection. Ian McEwan, whose Atonement provides Wood an illustrative example of how narration brilliantly captures character, is often taken to task for overplotting his novels. Citing Henry James in a New Yorker interview with Daniel Zalewski to the effect that novels have an obligation to “be interesting,” McEwan admits finding “most novels incredibly boring. It’s amazing how the form endures. Not being boring is quite a challenge.” And if he thinks most novels are boring he should definitely stay away from the short fiction that gets published in the New Yorker nowadays.
Scene from Atonement that never took place

A further implication of Wood’s observation about narration’s capacity for connecting reader to character is that characters who live eventful lives should inhabit eventful narratives. This shifts the issue of plot back to the issue of character, so the question is not what types of things should or shouldn’t happen in fiction but rather what type of characters do we want to read about? And there’s no question that literary fiction over the last century has been dominated by a bunch of passive losers, men and women flailing desperately about before succumbing to societal or biological forces. In commercial fiction, the protagonists beat the odds; in literature, the odds beat the protagonists.

There’s a philosophy at play in this dynamic. Heroes are thought to lend themselves to a certain view of the world, where overcoming sickness and poverty and cultural impoverishment is more of a rite of passage than a real gauge of how intractable those impediments are for nearly everyone who faces them. If audiences are exposed to too many tales of heroism, then hardship becomes a prop in personal development. Characters overcoming difficulties trivializes those difficulties. Winston Smith can’t escape O’Brien and Room 101 or readers won’t appreciate the true threat posed by Big Brother. The problem is that the ascent of the passive loser and the fiction of acquiescence don’t exactly inspire reform-minded action either.

Adam Gordon, the narrator of Leaving the Atocha Station, is definitely a loser. He worries all day that he’s some kind of impostor. He’s whiny and wracked with self-doubt. But even he doesn’t sit around doing nothing. The novel is about his trip to Spain. He pursues women with mixed success. He does readings of his poetry. He witnesses a terrorist attack. And these activities and events are interesting, as James insisted they must be. Capturing the feel of uneventful passages of time may be a worthy literary ambition, but most people seek out fiction to break up periods of nothingness. It’s never the case in real life that nothing is happening anyway—we’re at every instance getting older. I for one don’t find the prospect of spending time with people or characters who just sit passively by as that happens all that appealing.
In a remarkably lame failure of a lampoon in Harper's, Colson Whitehead targets Wood's enthusiasm for Saul Bellow. And Bellow was indeed one of those impossibly good writers who could describe eating Corn Flakes and make it profound and amusing. Still, I'm a little suspicious of anyone who claims to enjoy (though enjoyment shouldn't be the only measure of literary merit) reading about the Bellow characters who wander around Chicago as much as reading about Henderson wandering around Africa.

  Henderson: I'm actually looking forward to the next opportunity I get to hang out with that crazy bastard.

Read "Can't Win for Losing: Why there are so many Losers in Literature and Why it has to Change"

Wednesday, March 14, 2012

New Yorker's Talk of the Town Goes Sci-Fi

Dept. of Neurotechnology

Undermin(d)ing Mortality

"Most people's first response," Michael Maytree tells me over lunch, "is, you know, of course I want to live forever." The topic of our conversation surprises me, as Maytree's fame hinges not on his longevity—as remarkable as his ninety-seven years make him—but on his current status as record-holder for the greatest proportion of manmade brain in any human. Maytree says that, according to his doctors, his brain is around seventy percent prosthetic. (Most people with prosthetic brain parts bristle at the term "artificial," but Maytree enjoys his wife's running joke about any extraordinary aspect of his thinking apparatus being necessarily unreal.)

He goes on, "But then you have to ask yourself: Do I really want to live through the pain of grieving for people again and again? Is there enough to look forward to to make going on—and on and on—worthwhile?" He stops to take a long sip of his coffee while quickly scanning our fellow patrons in the diner on West 103rd. Only when his age is kept in mind does there seem anything unsettling about his sharp-eyed attunement. Within the spectrum of aging, Maytree could easily pass for a younger guy with poor skin resiliency.

"The question I find most troubling though is, will I, as I get really, really old, be able to experience things, particularly relationships, as…"—he rolls his right hand, still holding a forkful of couscous, as he searches for the mot juste—"as profoundly—or fulfillingly—as I did when I was younger." He smirks and adds, "Like when I was still in my eighties."

When we first sat down in the diner, I asked Maytree if he'd received much attention from anyone other than techies and fellow implantees. Aside from the never-ending cascade of questions posted on the MindFX website he helps run, which serves as something of a support-group listserv for people with brain prostheses and their families, and the requisite visits to research labs, including the one he receives medical care from, he gets noticed very little. The question about his brain he finds most interesting, he says, comes up frequently at the labs.

"I'd thought about it before I got the last implant," he said. "It struck me when Dr. Branson"—Maytree's chief neurosurgeon—"told me when it was done I'd have something like seventy percent brain replacement. Well, if my brain is already mostly mechanical, it shouldn't be that much of a stretch to transfer the part that isn't into some sort of durable medium—and, voilà, my mind would become immortal."

It turned out that the laboratory where Branson performed the surgery already had a division devoted to work on this very prospect. The surgery was the latest ("probably not the last," Maytree says) in a series of replacements and augmentations that began with a treatment for an injury he sustained in combat while serving in Iran and continued as he purchased shares in several biotech and neural implant businesses and watched their value soar. Though the work is being kept secret, it seems Maytree would be a likely subject if experimental procedures are in the offing. Hence my follow-up question: "Would you do it?"

"Think of a friend you've made recently," he enjoins me now, putting down his fork so he can gesticulate freely. "Now, is that friendship comparable—I mean emotion-wise—with friendships you began as a child? Sometimes I think there's no comparison; relationships in childhood are much deeper. Is it the same with every experience?" He rests his right elbow on the table next to his plate and leans in. "Or is the difference just a trick of memory? I honestly don't know."

(Another favorite question of Maytree's: Are you conscious? He says people usually add, or at least imply, "I mean, like me," to clarify. "I always ask then, 'Are you conscious—I mean like you were five years ago?' Naturally they can't remember.")

Finally, he leans back again, looks off into space shaking his head. "It's hard to think about without getting lost in the philosophical…" He trails off a moment before continuing dreamily, with downcast eyes and absent expression. "But it's important because you kind of need to know if the new experiences are going to be worth the passing on of the old ones." And that's the crux of the problem.

"Of course," he says turning back to me with a fraught grin, "it all boils down to what's going on in the brain anyway."

Dennis Junk

Tuesday, March 6, 2012

The Adaptive Appeal of Bad Boys

Image Courtesy of Why We Reason

Excerpt from Hierarchies in Hell and Leaderless Fight Clubs: Altruism, Narrative Interest, and the Adaptive Appeal of Bad Boys

            In a New York Times article published in the spring of 2010, psychologist Paul Bloom tells the story of a one-year-old boy’s remarkable response to a puppet show. The drama the puppets enacted began with a central character’s demonstration of a desire to play with a ball. After revealing that intention, the character rolls the ball to a second character who likewise wants to play and so rolls the ball back to the first. When the first character rolls the ball to a third, however, this puppet snatches it up and quickly absconds. The second, nice puppet and the third, mean one are then placed before the boy, who’s been keenly attentive to their doings, and each has a few treats placed before it. The boy is now instructed by one of the adults in the room to take a treat away from one of the puppets. Most children respond to the instructions by taking the treat away from the mean puppet, and this particular boy is no different. He’s not content with such a meager punishment, though, and after removing the treat he proceeds to reach out and smack the mean puppet on the head.

            Brief stage shows like the one featuring the nice and naughty puppets are part of an ongoing research program led by Karen Wynn, Bloom’s wife and colleague, and graduate student Kiley Hamlin at Yale University’s Infant Cognition Center. An earlier permutation of the study was featured on PBS’s Nova series The Human Spark (jump to chapter 5), which shows host Alan Alda looking on as an infant named Jessica attends to a puppet show with the same script as the one that riled the boy Bloom describes. Jessica is so tiny that her ability to track and interpret the puppets’ behavior on any level is impressive, but when she demonstrates a rudimentary capacity for moral judgment by reaching with unchecked joy for the nice puppet while barely glancing at the mean one, Alda—and Nova viewers along with him—can’t help but demonstrate his own delight. Jessica shows unmistakable signs of positive emotion in response to the nice puppet’s behaviors, and Alda in turn feels positive emotions toward Jessica. Bloom attests that “if you watch the older babies during the experiments, they don’t act like impassive judges—they tend to smile and clap during good events and frown, shake their heads and look sad during the naughty events” (6). Any adult witnessing the children’s reactions can be counted on to mirror these expressions and to feel delight at the babies’ incredible precocity.

            The setup for these experiments with children is very similar to experiments with adult participants that assess responses to anonymously witnessed exchanges. In their research report, “Third-Party Punishment and Social Norms,” Ernst Fehr and Urs Fischbacher describe a scenario inspired by economic game theory called the Dictator Game. It begins with an experimenter giving a first participant, or player, a sum of money. The experimenter then explains to the first player that he or she is to propose a cut of the money to the second player. In the Dictator Game—as opposed to other similar game theory scenarios—the second player has no choice but to accept the cut from the first player, the dictator. The catch is that the exchange is being witnessed by a third party, the analogue of little Jessica or the head-slapping avenger in the Yale experiments.  This third player is then given the opportunity to reward or punish the dictator. As Fehr and Fischbacher explain, “Punishment is, however, costly for the third party so a selfish third party will never punish” (3).
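The incentive structure Fehr and Fischbacher describe can be sketched in a few lines of code. This is a hypothetical toy model, not the actual experimental protocol or parameters; the function name and the three-to-one fine ratio are illustrative assumptions:

```python
# Toy model of the three-player Dictator Game with third-party punishment.
# All parameter values here are illustrative, not Fehr and Fischbacher's.

def third_party_dictator_game(endowment, offer, punishment_spent, fine_ratio=3):
    """Return final payoffs as (dictator, recipient, third_party).

    The third party pays `punishment_spent` out of pocket, and the dictator
    is fined `fine_ratio` times that amount. Because punishing is always
    costly, a purely selfish third party would set punishment_spent to zero.
    """
    dictator = endowment - offer - fine_ratio * punishment_spent
    recipient = offer
    third_party = -punishment_spent  # punishing never pays for the punisher
    return dictator, recipient, third_party

# A maximally selfish dictator keeps everything; an indignant third party
# pays to punish anyway, ending up strictly worse off than doing nothing.
print(third_party_dictator_game(endowment=100, offer=0, punishment_spent=10))
# -> (70, 0, -10)
```

The point the sketch makes concrete is the one Fehr and Fischbacher draw: the third party's payoff can only go down by punishing, so any punishment observed in the lab is evidence of something other than selfish rationality.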

It turns out, though, that adults, just like the infants in the Yale studies, are not selfish—at least not entirely. Instead, they readily engage in indirect, or strong, reciprocity. Evolutionary literary theorist William Flesch explains that “the strong reciprocator punishes and rewards others for their behavior toward any member of the social group, and not just or primarily for their interactions with the reciprocator” (21-2). According to Flesch, strong reciprocity is the key to solving what he calls “the puzzle of narrative interest,” the mystery of why humans so readily and eagerly feel “anxiety on behalf of and about the motives, actions, and experiences of fictional characters” (7). The human tendency toward strong reciprocity reaches beyond any third party witnessing an exchange between two others; as Alda, viewers of Nova, and even readers of Bloom’s article in the Times watch or read about Wynn and Hamlin’s experiments, they have no choice but to become participants in the experiments themselves, because their own tendency to reward good behavior with positive emotion and to punish bad behavior with negative emotion is automatically engaged. Audiences’ concern, however, is much less with the puppets’ behavior than with the infants’ responses to it.

The studies of social and moral development conducted at the Infant Cognition Center pull at people’s heartstrings because they demonstrate babies’ capacity to behave in a way that is expected of adults. If Jessica had failed to distinguish between the nice and the mean puppets, viewers probably would have readily forgiven her. When older people fail to make moral distinctions, however, those in a position to witness and appreciate that failure can be counted on to withdraw their favor—and may even engage in some type of sanctioning, beginning with unflattering gossip and becoming more severe if the immorality or moral complacency persists. Strong reciprocity opens the way for endlessly branching nth-order reciprocation, so individuals will be considered culpable not only for offenses they commit but also for offenses they passively witness. Flesch explains,

Among the kinds of behavior that we monitor through tracking or through report, and that we have a tendency to punish or reward, is the way others monitor behavior through tracking or through report, and the way they manifest a tendency to punish and reward. (50)

Failing to signal disapproval makes witnesses complicit. On the other hand, signaling favor toward individuals who behave altruistically simultaneously signals to others the altruism of the signaler. What’s important to note about this sort of indirect signaling is that it does not necessarily require the original offense or benevolent act to have actually occurred. People take a proclivity to favor the altruistic as evidence of altruism—even if the altruistic character is fictional. 

        That infants less than a year old respond to unfair or selfish behavior with negative emotions—and a readiness to punish—suggests that strong reciprocity has deep evolutionary roots in the human lineage. Humans’ profound emotional engagement with fictional characters and fictional exchanges probably derives from a long history of adapting to challenges whose Darwinian ramifications were far more serious than any attempt to while away some idle afternoons. Game theorists and evolutionary anthropologists have a good idea what those challenges might have been: for cooperativeness or altruism to be established and maintained as a norm within a group of conspecifics, some mechanism must be in place to prevent the exploitation of cooperative or altruistic individuals by selfish and devious ones. Flesch explains,

Darwin himself had proposed a way for altruism to evolve through the mechanism of group selection. Groups with altruists do better as a group than groups without. But it was shown in the 1960s that, in fact, such groups would be too easily infiltrated or invaded by nonaltruists—that is, that group boundaries are too porous—to make group selection strong enough to overcome competition at the level of the individual or the gene. (5)

If, however, individuals given to trying to take advantage of cooperative norms were reliably met with slaps on the head—or with ostracism in the wake of spreading gossip—any benefits they (or their genes) might otherwise count on to redound from their selfish behavior would be much diminished. Flesch’s theory is “that we have explicitly evolved the ability and desire to track others and to learn their stories precisely in order to punish the guilty (and somewhat secondarily to reward the virtuous)” (21). Before strong reciprocity was driving humans to bookstores, amphitheaters, and cinemas, then, it was serving the life-and-death cause of ensuring group cohesion and sealing group boundaries against neighboring exploiters. 

Game theory experiments that have been conducted since the early 1980s have consistently shown that people are willing, even eager, to punish others whose behavior strikes them as unfair or exploitative, even when administering that punishment involves incurring some cost for the punisher. Like the Dictator Game, the Ultimatum Game involves two people, one of whom is given a sum of money and told to offer the other participant a cut. The catch in this scenario is that the second player must accept the cut or neither player gets to keep any money. “It is irrational for the responder not to accept any proposed split from the proposer,” Flesch writes. “The responder will always come out better by accepting than vetoing” (31). What the researchers discovered, though, was that a line exists beneath which responders will almost always refuse the cut. “This means they are paying to punish,” Flesch explains. “They are giving up a sure gain in order to punish the selfishness of the proposer” (31). Game theorists call this behavior altruistic punishment because “the punisher’s willingness to pay this cost may be an important part in enforcing norms of fairness” (31). In other words, the punisher is incurring a cost to him or herself in order to ensure that selfish actors don’t have a chance to get a foothold in the larger, cooperative group. 
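The veto logic can be illustrated with a minimal sketch. The thirty-percent rejection threshold below is a stand-in assumption; real responders vary, though offers under roughly a fifth to a third of the pot are frequently refused:

```python
# Toy Ultimatum Game: the responder vetoes offers below a fairness threshold,
# even though rejecting means both players walk away with nothing.
# The 0.3 threshold is illustrative, not an empirical constant.

def ultimatum_game(pot, offer, rejection_threshold=0.3):
    """Return final payoffs as (proposer_payoff, responder_payoff)."""
    if offer < rejection_threshold * pot:
        return 0, 0  # altruistic punishment: the responder pays to punish
    return pot - offer, offer

print(ultimatum_game(pot=100, offer=10))  # -> (0, 0): insulting offer vetoed
print(ultimatum_game(pot=100, offer=40))  # -> (60, 40): fair enough, accepted
```

The first call is the behavior Flesch highlights: the responder gives up a sure ten units in order to make the proposer's selfishness cost him ninety.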

The economic logic notwithstanding, it seems natural to most people that second players in Ultimatum Game experiments should signal their disapproval—or stand up for themselves, as it were—by refusing to accept insultingly meager proposed cuts. The cost of the punishment, moreover, can be seen as a symbol of various other types of considerations that might prevent a participant or a witness from stepping up or stepping in to protest. Discussing the Three-Player Dictator Game experiments conducted by Fehr and Fischbacher, Flesch points out that strong reciprocity is even more starkly contrary to any selfish accounting:

Note that the third player gets nothing out of paying to reward or punish except the power or agency to do just that. It is highly irrational for this player to pay to reward or punish, but again considerations of fairness trump rational self-interest. People do pay, and pay a substantial amount, when they think that someone has been treated notably unfairly, or when they think someone has evinced marked generosity, to affect what they have observed. (33)

Neuroscientists have even zeroed in on the brain regions that correspond to our suppression of immediate self-interest in the service of altruistic punishment, as well as those responsible for the pleasure we take in anticipating—though not in actually witnessing—free riders meeting with their just deserts (Knoch et al. 829; de Quervain et al. 1254). Outside of laboratories, though, the cost punishers incur can range from the risks associated with a physical confrontation to time and energy spent convincing skeptical peers a crime has indeed been committed.

Flesch lays out his theory of narrative interest in a book aptly titled Comeuppance: Costly Signaling, Altruistic Punishment, and Other Biological Components of Fiction. A cursory survey of mainstream fiction, in both blockbuster movies and best-selling novels, reveals the good guys versus bad guys dynamic as preeminent in nearly every plot, and much of the pleasure people get from the most popular narratives can quite plausibly be said to derive from the goodie prevailing—after a long, harrowing series of close calls and setbacks—while the baddie simultaneously gets his or her comeuppance. Audiences love to see characters get their just deserts. When the plot fails to deliver on this score, they walk away severely disturbed. That disturbance can, however, serve the author’s purposes, particularly when the goal is to bring some danger or injustice to readers’ or viewers’ attention, as in the case of novels like Orwell’s 1984. Plots, of course, seldom feature simple exchanges with meager stakes on the scale of game theory experiments, and heroes can by no means count on making it to the final scene both vindicated and rewarded—even in stories designed to give audiences exactly what they want. The ultimate act of altruistic punishment, and hence the most emotionally poignant behavior a character can engage in, is martyrdom. It’s no coincidence that the hero dies in the act of vanquishing the villain in so many of the most memorable books and movies.
Tom Sawyer
            If narrative interest really does emerge out of a propensity to monitor each other’s behaviors for signs of a capacity for cooperation and to volunteer affect on behalf of altruistic individuals and against selfish ones they want to see get their comeuppance, the strong appeal of certain seemingly bad characters emerges as a mystery calling for explanation. From England’s tradition of Byronic heroes like Rochester to America’s fascination with bad boys like Tom Sawyer, these characters win over audiences and stand out as perennial favorites even though at first blush they seem anything but eager to establish their nice-guy bona fides. On the other hand, Rochester is eventually redeemed in Jane Eyre, and Tom Sawyer, though naughty to be sure, shows no sign whatsoever of being malicious. Tellingly, though, these characters, and a long list of others like them, also demonstrate a remarkable degree of cleverness: Rochester passing for a gypsy woman, for instance, or Tom Sawyer making fence painting out to be a privilege. One hypothesis that could account for the appeal of bad boys is that their badness demonstrates undeniably their ability to escape the negative consequences most people expect to result from their own bad behavior.

This type of demonstration likely functions in a way similar to another mechanism that many evolutionary biologists theorize must have been operating for cooperation to have become established in human societies, a process referred to as the handicap principle, or costly signaling. A lone altruist in any group is unlikely to fare well in terms of survival and reproduction. So the question arises as to how the minimum threshold of cooperators in a population was first surmounted. Flesch’s fellow evolutionary critic, Brian Boyd, in his book On the Origin of Stories, traces the process along a path from mutualism, or coincidental mutual benefits, to inclusive fitness, whereby organisms help others who are likely to share their genes—primarily family members—to reciprocal altruism, a quid pro quo arrangement in which one organism will aid another in anticipation of some future repayment (54-57). However, a few individuals in our human ancestry must have benefited from altruism that went beyond familial favoritism and tit-for-tat bartering.

Rochester disguised as a gypsy
In their classic book The Handicap Principle, Amotz and Avishag Zahavi suggest that altruism serves a function in cooperative species similar to the one served by a peacock’s feathers. The principle could also help account for the appeal of human individuals who routinely risk suffering consequences that deter most others. The idea is that conspecifics have much to gain from accurate assessments of each other’s fitness when choosing mates or allies. Many species have thus evolved methods for honestly signaling their fitness, and as the Zahavis explain, “in order to be effective, signals have to be reliable; in order to be reliable, signals have to be costly” (xiv). Peacocks, the iconic examples of the principle in action, signal their fitness with cumbersome plumage because their ability to survive in spite of the handicap serves as a guarantee of their strength and resourcefulness. Flesch and Boyd, inspired by evolutionary anthropologists, find in this theory of costly signaling the solution to the mystery of how altruism first became established; human altruism is, if anything, even more elaborate than the peacock’s display. 

Humans display their fitness in many ways. Not everyone can be expected to have the wherewithal to punish free-riders, especially when doing so involves physical conflict. The paradoxical result is that humans compete for the status of best cooperator. Altruism is a costly signal of fitness. Flesch explains how this competition could have emerged in human populations:

If there is a lot of between-group competition, then those groups whose modes of costly signaling take the form of strong reciprocity, especially altruistic punishment, will outcompete those whose modes yield less secondary gain, especially less secondary gain for the group as a whole. (57)

Taken together, the evidence Flesch presents suggests that audiences of narratives volunteer affect on behalf of fictional characters who show themselves to be altruists and against those who show themselves to be selfish actors or exploiters, experiencing both frustration and delight in the unfolding of the plot as they hope to see the altruists prevail and the free-riders get their comeuppance. This capacity for emotional engagement with fiction likely evolved because it serves as a signal to anyone monitoring individuals as they read or view the story, or as they discuss it later, that they are disposed either toward altruistic punishment or toward third-order free-riding themselves—and altruism is a costly signal of fitness.

The hypothesis emerging from this theory of social monitoring and volunteered affect to explain the appeal of bad boy characters is that their bad behavior will tend to redound to the detriment of still worse characters. Bloom describes the results of another series of experiments with eight-month-old participants:

When the target of the action was itself a good guy, babies preferred the puppet who was nice to it. This alone wasn’t very surprising, given that the other studies found an overall preference among babies for those who act nicely. What was more interesting was what happened when they watched the bad guy being rewarded or punished. Here they chose the punisher. Despite their overall preference for good actors over bad, then, babies are drawn to bad actors when those actors are punishing bad behavior. (5)

These characters’ bad behavior will also likely serve an obvious function as costly signaling; they’re bad because they’re good at getting away with it. Evidence that the bad boy characters are somehow truly malicious—for instance, clear signals of a wish harm to innocent characters—or that they’re irredeemable would severely undermine the theory. As the first step toward a preliminary survey, the following sections examine two infamous instances in which literary characters whose creators intended audiences to recognize as bad nonetheless managed to steal the show from the supposed good guys.
(Watch Hamlin discussing the research in an interview from earlier today.)
And check out this video of the experiments.