Character-driven fiction, plot-driven fiction

Charles Baxter

Last year I wrote about dysfunctional narratives, a type of story that Charles Baxter first identified in the 1990s and which seems overly prevalent today. He quoted a description of them by poet Marilynne Robinson, who called this type of narrative a “mean little myth”:

One is born and in passage through childhood suffers some grave harm. Subsequent good fortune is meaningless because of the injury, while subsequent misfortune is highly significant as the consequence of this injury. The work of one’s life is to discover and name the harm one has suffered.

In my post, I wrote about a “Cambrian explosion” of dysfunctional narratives in our culture since the 1990s, this sense that we’re being overwhelmed by them. They’re in our magazines and books, in our cinema, in our newspapers, and on social media. “Reading begins to be understood as a form of personal therapy or political action,” Baxter wrote, and his observation seems as acute today as it did back then.

Last year I offered a few explanations for what energized this explosion. Recently I thought of another reason to add to the list. It’s a concept repeated endlessly in creative writing classes and how-to guides on writing fiction, namely, character-driven fiction versus plot-driven fiction. Respectable authors are supposed to write character-driven fiction and to eschew plot-driven fiction, which is largely associated with genre fiction.

When I first heard this edict of character versus plot, I accepted it as sage wisdom, and sought to follow it closely. Over the years, I kept hearing it from instructors and successful writers, especially writers of so-called literary fiction. I heard it so much, I began to question it. What exactly is character? What is plot?

I began to pose these questions to my peers. Their response usually sounded like this:

“‘Character’ is all the things that make a character unique. ‘Plot’ is the stuff that happens in a story.” A character-driven story is supposedly rich with humanizing details, while a plot-driven piece is a fluffy story where “a lot of stuff happens.”

Aristotle is not the final word on literary analysis, but his opinions on how a story succeeds or fails are far more nuanced than what many of my peers and instructors in creative writing programs could offer.

Aristotle defines character as a set of human traits imitated in the text. Traits could be run-of-the-mill personality markers, such as a character who is studious or arrogant, or complex and contradictory, like Hamlet’s brooding and questioning nature. Before modern times, playwrights often used traits associated with the four humors to define characters in a play.

The four humors

For Aristotle, plot is the series of decisions a character makes that propels the story forward. These decisions generally take two forms: The character speaks, or the character acts. In line with the saying “actions speak louder than words,” Aristotle holds that a character’s actions are more significant, and more revealing, than the words they mouth.

When one of the salesmen in Glengarry Glen Ross announces he’s going to close a big sale that night, and then crosses the street to have a cocktail, his actions reveal the hollowness of his words. Both decisions (speaking and acting) are also plot. Plot proves what character traits merely suggest.1

In other words, plot is not “stuff that happens.” (Note the passive voice, as though plot elements are forced upon the characters.) Rather, plot is a sequence of decisions made—and readers are very interested in a character’s decisions.

To be fair, inaction by a character is a kind of decision. Certainly there’s room for stories about characters who ponder a great deal and do little about it. In successful fiction, though, the final effect of inaction is almost always ironic. (Two good examples are Richard Ford’s “Rock Springs” and Thurber’s “The Secret Life of Walter Mitty.”) The problem is when inaction in literary fiction is treated as sublime.

The inaccurate, watered-down definition of plot-driven fiction—”A story where a lot of stuff happens”—has led to contemporary American literature’s fascination with flabby, low-energy narratives. I’ve met authors proud that the characters in their stories don’t do anything—never get off the couch, never pick up the phone, never make a decision of any consequence. Literary fiction has come to regard passivity as a virtue and action as a vice. A writer crafting a character who takes matters into their own hands risks having their work classified as genre fiction.

For decades now, creative writing programs have been pushing an aesthetic emphasizing character traits over character decisions. It’s frustrating to watch, year after year, the primacy of character-driven fiction getting pushed on young writers, with too many of them accepting the mantra without further consideration.

And this is why I think the Cambrian explosion of dysfunctional narratives is tied to this obsession with character-driven fiction. Passivity and inactivity are keystones of Baxter’s dysfunctional narratives. In his essay, he notes the trend toward “me” stories (“the protagonists…are central characters to whom things happen”) over “I” stories (“the protagonist makes certain decisions and takes some responsibility for them”).

This is why I’m wary of character-driven writers who do not permit their protagonists to make mistakes, instead strategically devising stories where they make no mistakes, and are therefore blameless. No wonder plot—that is, decision-making—is being eschewed, when this is the kind of story being upheld and praised.

  1. Aristotle’s Poetics is obviously far more complicated than my three-paragraph summary, but the gist described here holds.

Rethinking realism

Close-up of man's face from "The Arnolfini Portrait" by Jan van Eyck

Not rethinking realism, as in rethinking philosophy’s single, objective reality, hard as rocks and nails. No, I mean rethinking realism in the sense of questioning the elevation of literary realism over the many other forms of fiction.

Realism has long been the go-to form in literature for telling a story a certain way. An entire literary style—Naturalism—sprang from the sense that Romanticism had gone too far and produced a literature divorced from the world as commonly experienced. The pendulum later swung in the other direction, and for a period of time realistic literature was derided as bourgeois and reactionary. Since World War II, with the rise of creative writing programs and a reinvigorated enforcement of upper-class distinctions, kitchen-table realism has returned to the pinnacle of literary loftiness in America.

So it’s funny to me that realism is also so important in popular entertainment. Nowhere is this truer than with television, which is obsessed with depicting reality—from “you are there”-style news reporting to game shows branded as “reality TV.” When the writers of TV’s M*A*S*H killed off Col. Henry Blake in a season finale, they were inundated with letters from outraged viewers. The Emmy award-winning writing team’s response was, “Well, that’s reality.” American auteur Robert Altman famously ends Nashville with an out-of-the-blue assassination of a central character. Why? Because, he explained, that’s reality.

It’s not that these plot points are faulty or wrong-headed. My complaint is that the excuse—“It’s reality”—is a lazy defense of artistic choices. Writers should cop to their decisions rather than take the passive route and say reality made the choice for them. Writers should ask themselves if a “realistic” moment is adding to, or subtracting from, the story.

Anyone who’s attended a creative writing class, workshop, or MFA program is familiar with the high ground presumed by realism. The trendy term is “psychologically realistic fiction.” In writing programs, names like Raymond Carver, Amy Hempel, Tobias Wolff, and Tim O’Brien are tossed out as the zenith of American writing. Students are explicitly encouraged to emulate them, and their importance is implicitly signaled by their repeated presence in syllabi and required-reading lists. (I’ve read “The Things They Carried” at least eight times over the course of decades of writing groups and classes.) These authors are lionized for many reasons, but importantly, they all wrote about reality.

(There are two exceptions worth mentioning: One is magical realism, although its high regard in writing programs is tied up with identity politics. The other is Borges, whom I jokingly refer to as science-fiction for MFA students. It must be noted that both exceptions originate from outside the United States. Kafka, incidentally, is read and praised in writing programs as well, but not in such a way as to encourage emulation—I suspect my instructors liked the idea of Kafka more than Kafka’s output.)

Look at how so much literary fiction operates. Protagonists tend to be thoughtful, rational, and deliberative—often, they exhibit little to no affect. Characters in opposition tend to be boorish, thoughtless, and emotional. Dialogue is either flat and unadorned, or snappy, like the patter of a stand-up comic. Scenes flow as one character utters a brief line, followed by paragraphs of rumination; the other character responds, and more paragraphs of rumination follow.

The prose might be good—it might even be inspired—but is this realism? Going through contemporary literary magazines, reading one story after another, I’m not sure you’ll find a lot of psychological realism, in the sense of psychiatry’s DSM-5.

Genre fiction is not immune either. Too often connoisseurs of hard-boiled detective fiction and tough-guy novels claim their favorite authors are superior because of their attention to realism. Raymond Chandler’s “The Simple Art of Murder” is wonderful and insightful criticism, but at its heart is a trashing of the classic British mystery because “fiction in any form has always intended to be realistic.” It’s one of the few arguments in the essay that I question.

Janet Burroway wrote, “Sometimes reality doesn’t make for good fiction.” It’s a tough lesson to learn, and one that even seasoned writers fail to grasp.

After all, there is no widely accepted maxim stating that the primary purpose of story is to reproduce reality. Fiction is supposed to be an expression of a writer’s inner state, not a dry report of the who, what, where, and when. Besides, why do we need to reproduce reality with such fidelity? We’re soaking in it. If you want reality, put down your phone or leave your computer screen. You have returned to reality, effortlessly.

In a writing class I attended, one of the students was a fan of horror, particularly H. P. Lovecraft and Robert Chambers’ The King in Yellow. At an end-of-semester presentation before the class, he expressed frustration at the hard-realism reading list we’d been given, and of the months of instruction requiring him to write in similar form. “Reading about reality is like reading about your job on your day off,” he told us. There’s something to that.

Story creates a transcendence within the reader. This transcendence defies reality while mimicking it—reality is Play-Doh in the hands of an adept writer. From hard realism to squishy-soft fantasy and everything in-between, great writing takes me to another place and time, a chance to live another person’s life. Books are “portable dreamweavers.”

Has the digital revolution killed fiction?

Obituary billboard
by Elliot Brown (CC BY-ND 2.0)

Will Blythe at Esquire asks, “In the golden age of magazines, short stories reigned supreme. Has the digital revolution killed their cultural relevance?”

Wearily, I started his essay expecting more of the same, and lo, finding it: Computers and the Internet, he contends, have done much to destroy literary fiction. By this point, I’m surprised any writer pursuing such a thesis would bother fortifying their argument with examples or statistics. Blythe does not disappoint on that count: Other than some “c’mon, look around, you know what I’m saying,” the argument is made sans corroborating evidence. Of course the Internet has wrecked American literature. Why bother denying it?

It’s telling, then, that Blythe opens with the usual barrage of accusations about digital distractions—”Can you read anything at all from start to finish, i.e. an essay or a short story, without your mind being sliced apart by some digital switchblade?”—and then, to prove how things used to be so much better way back when, he segues to life as an Esquire editor in the 1980s and 90s:

[Rust Hill] and I would occasionally drink two or three Negronis at lunch, sometimes at the New York Delicatessen on 57th Street, and talk about the writers and novels and short stories we loved (and hated). … Then he and I would happily weave our way back to the office at 1790 Broadway, plop down in our cubicles and make enthusiastic phone calls to writers and agents, our voices probably a little louder than usual.

The jokes about fiction editors at a national magazine choosing stories to publish after a three-cocktail lunch write themselves, so I won’t bother. (Although I should, since, early in my writing career, I had high hopes of placing a short story with a publication like Esquire. Perhaps I should have mailed a bottle of Bombay with each of my submissions.)

The dichotomy Blythe illustrates is telling: The hellish “after” is the mob writing Amazon user reviews and him not knowing how to turn off iPhone notifications; the blissful “before” is editorial cocktail lunches and not having to give a rat’s ass what anyone else thinks.

One counterpoint to Blythe’s thesis: The 1980s had plenty of distractions, including the now-obvious inability to silence your telephone without taking it off the hook. Another counterpoint: If you want to drink Negronis and argue literature over Reubens, well, you can do that today too. A third counterpoint: A short story printed in the pages of Esquire was sandwiched between glossy full-color ads for sports cars, tobacco, and liquor—most featuring leggy models in evening gowns or swimsuits. Distractions abounded, even before the Internet.

But none of these are what Blythe is really talking about. What he bemoans is the diffusion of editorial power over the past twenty years.


Blythe throws a curveball—a predictable curveball—after his reminiscences about Negronis and schmears. Sure, computers are to blame for everything, but the real crime is that computers now permit readers to make their opinions on fiction known:

Writers and writing tend to be voted upon by readers, who inflict economic power (buy or kill the novel!) rather than deeply examining work the way passionate critics once did in newspapers and magazines. Their “likes” and “dislikes” make for massive rejoinders rather than critical insight. It’s actually a kind of bland politics, as if books and stories are to be elected or defeated. Everyone is apparently a numerical critic now, though not necessarily an astute one.

I don’t believe Blythe has done a thorough job surveying the digital landscape to consider the assortment and quality of reader reviews out there. There are, in fact, a plenitude of readers penning worthy critical insight about fiction. Just as there are many great writers who deserve wider audiences, there also exist critical readers who should be trumpeted farther afield.

Setting that aside, I still happily defend readers content to note a simple up/down vote as their estimation of a book. Not every expression of having read a book demands an in-depth 8,000-word essay on the plight of the modern Citizen of the World.

Rather, I believe Blythe—like so many others in the literary establishment—cannot accept that readers could have any worthwhile expressible opinion about fiction. The world was so much easier when editors at glossy magazines issued the final word on what constituted good fiction and what was a dud. See also a book I’m certain Blythe detests, A Reader’s Manifesto, which tears apart—almost point by point—Blythe’s gripes.

Cover of A Reader's Manifesto by B.R. Myers

When B. R. Myers’ Manifesto was published twenty years ago, a major criticism of it was that Myers was tilting at windmills—that the literary establishment was not as snobbish and elitist as he described. Yet here Blythe is practically copping to the charges.

Thus the inanity of his complaining that today’s readers hold the power to “inflict economic power” when, apparently, such power should reside solely with critics and magazine editors. I don’t even want to argue this point; it is a retrograde understanding of how the world should work. This is why golden age thinking is so pernicious—the assumption that because things used to be a certain way, that was the best way. Except when it’s not.

Of course the world was easier for the editors of national slicks fifty years ago, just as life used to be good for book publishers, major news broadcasters, and the rest of the national media. It was also deeply unsatisfying if one were not standing near the top of those heaps. It does not take much scratching in the dirt to understand the motivations of the counterculture and punk movements in producing their own criticism. The only other option back then was to bow to the opinions of a klatch of New York City editors and critics whose ascendancy was even more opaque than the bishops of the Holy See.

That said, it’s good to see a former Esquire editor praise the fiction output of magazines that, not so long ago, editors at that level were expected to sneer down upon: Publications such as Redbook, McCall’s, Analog, and Asimov’s Science Fiction all get an approving nod from Blythe.

But to cling to the assertion that in mid-century America “short fiction was a viable business, for publishers and writers alike” is golden age-ism at its worst. Sure, a few writers could make a go of it, but in this case the exceptions do not prove the rule. The vast sea of short story writers in America had to settle for—and continue to settle for—being published in obscure literary magazines and paid in free copies.

No less than Arthur Miller opined that the golden age of American theater arced in his own lifetime. Pianist Bill Evans remarked he was blessed to have experienced the tail end of jazz’s golden age in America before rock ‘n’ roll sucked all the oxygen out of the room. Neither of those artistic golden ages perished because of the Internet.

What caused them to die? That’s complicated, sure, but their demise—or, at least, rapid descent—was preceded by a turn toward the avant-garde. Which is to say, it became fashionable for jazz and theater to distance themselves from their audience under the guise of moving the art forward. The only moving that happened, though, was the audience heading for the exits.


Blythe then turns his attention to a third gripe in his meandering essay. Without a shred of evidence, he argues that the digital revolution of the last twenty-five years metastasized into a cultural Puritanism in today’s publishing world:

Perhaps because of online mass condemnations, there’s simply too much of an ethical demand in fiction from fearful editors and “sensitivity readers,” whose sensitivity is not unlike that of children raised in religious families… Too many authors and editors fear that they might write or publish something that to them, at least, is unknowingly “wrong,” narratives that will reveal their ethical ignorance, much to their shame. It’s as if etiquette has become ethics, and blasphemy a sin of secularity.

I cannot deny that there appears to be a correlation between the rise of the Internet in our daily lives and the shift over the last decade to cancel or ban “problematic” literature. What I fail to see is how pop-up alerts or a proliferation of Wi-Fi hot spots is to blame for this situation.

If Blythe were to peer backwards once more to his golden age of gin-soaked lunches, he would recall a nascent cultural phenomenon called “political correctness.” P.C. was the Ur-movement to today’s sensitivity readers and skittish editors. Social media whipped political correctness’ protestations into a hot froth of virtuous umbrage—a video game of one-upmanship in political consciousness, where high scores are tallied with likes and follower counts. Using social media as leverage to block books from publication was the logical next step. But blaming computers for this situation is like blaming neutrons for the atom bomb.


After a dozen paragraphs of shaking my head at Blythe’s litany of complaints, I was pleasantly surprised to find myself in agreement with him:

The power of literary fiction—good literary fiction, anyway—does not come from moral rectitude. … Good literature investigates morality. It stares unrelentingly at the behavior of its characters without requiring righteousness.

At the risk of broken-record syndrome, I’ll repeat my claim that Charles Baxter’s “Dysfunctional Narratives” (penned twenty-five years ago, near the beginning of the Internet revolution) quietly predicted the situation Blythe is griping about today. Back then, Baxter noticed the earliest stirrings of a type of fiction where “characters are not often permitted to make intelligent and interesting mistakes and then to acknowledge them. … If fictional characters do make such mistakes, they’re judged immediately and without appeal.” He noted that reading had begun “to be understood as a form of personal therapy or political action,” and that this type of fiction was “pre-moralized.”

"Burning Down the House" by Charles Baxter

Unlike Blythe, Baxter did not fret that literary fiction would perish. Baxter was a creative writing instructor at a thriving Midwestern MFA program. He knew damn well that writing literary fiction was a growth industry, and in no danger of extinction. What concerned him was how much of this fiction was (and is) “me” fiction, that is, centered around passive protagonists suffering through some wrong. He noticed a dearth of “I” fiction with active protagonists who make decisions and face consequences.

As Blythe writes:

Too many publishers and editors these days seem to regard themselves as secular priests, dictating right and wrong, as opposed to focusing on the allure of the mystifying and the excitement of uncertainty. Ethics and aesthetics appear in this era to be intentionally merged, as if their respective “good” is identical.

If Blythe is going to roll his eyes at the glut of reader-led cancellations and moralizing editors, perhaps he could consider another glut in the literary world: The flood of the literary memoir, with its “searing” psychic wounds placed under microscope, and its inevitably featherweight closing epiphany. These testaments of self-actualization may be shelved under nonfiction, but they are decidedly fictional in construction. In the literary world, stories of imagination and projection have been superseded by stories of repurposed memory, whose critical defense is, invariably, “But this really happened.”

It was not always so. Memoir was once synonymous with popular fiction. Autobiography was reserved for celebrities such as Howard Cosell and Shirley MacLaine, or a controversial individual who found themselves in the nation’s spotlight for a brief moment. It was not treated as a high art form, and was perceived in some quarters as self-indulgent. No more.

There remains an audience for great fiction. Readers know when they’re being talked down to. They know the difference between a clueless author being crass and a thoughtful author being brutally honest. They also know the difference between a ripping yarn and a pre-moralized story they’re “supposed” to read, like eating one’s vegetables.

The death of literary fiction—especially the short story—will not be due to iPhone notifications and social media cancellations. Perhaps the problem Blythe senses is the loss of a mission to nurture and promote great fiction. The literary world has turned inward and grown insular. Its priorities are so skewed, I’ve witnessed literary writers question if fiction can even be judged or critiqued. The worsening relationship of class to literary fiction should not be overlooked, either.

If Blythe laments Asimov’s Science Fiction, perhaps he should check out the thriving Clarkesworld. Substacks of regular short fiction are regularly delivering work to thousands of readers. I don’t know if these publications’ editors are gulping down Negronis during their daily Zoom meetings—but as long as they’re putting out quality fiction that challenges and questions and enlightens, maybe that doesn’t matter, and never did.

Charles Baxter’s dysfunctional narratives

Charles Baxter

What if I told you that there’s been a sea-change in American storytelling over the past half-century? Not merely a change in subject matter, but that the fundamental nature of American narratives radically shifted? Would you believe me?

Now, what if I told you that a writer twenty-five years ago described these “new” stories, and even predicted they would become the dominant mode in our future? Would you believe that?

In 1997, Charles Baxter published Burning Down the House, a collection of essays on the state of American literature. It opens with “Dysfunctional Narratives: or, ‘Mistakes were Made,’” a blistering piece of criticism that not only detailed the kinds of stories he was reading back then, but predicted the types of stories we read and tell each other today.

Baxter appropriated the term “dysfunctional narrative” from poet C. K. Williams, but he expounded and expanded upon it so much, it’s fair to say he’s made the term his own. He borrowed a working definition of dysfunctional narratives from poet Marilynne Robinson, who described this modern mode of writing as a “mean little myth”:

One is born and in passage through childhood suffers some grave harm. Subsequent good fortune is meaningless because of the injury, while subsequent misfortune is highly significant as the consequence of this injury. The work of one’s life is to discover and name the harm one has suffered.

Baxter adds that the source of this injury “can never be expunged.” As for the ultimate meaning of these stories: “The injury is the meaning.”

To claim this mode of writing has become the dominant one in American culture demands proof, or at least some supporting evidence. Baxter lists examples, such as Richard Nixon’s passive-voice gloss over the Watergate cover-up (“mistakes were made”), Jane Smiley’s A Thousand Acres, and conspiracy theories, among others.

“Dysfunctional Narratives” doesn’t succeed by tallying a score, however. Rather, it describes a type of story that sounds all-too-familiar to modern ears:

Reading begins to be understood as a form of personal therapy or political action. In such an atmosphere, already moralized stories are more comforting than stories in which characters are making complex or unwitting mistakes.

Don’t merely consider Baxter’s descriptions in terms of books. News stories, the social media posts scrolling up your daily feed, even the way your best friend goes into how their boss has slighted them at work—all constitute narratives, small or large. Dysfunctional narratives read as if the storyteller’s thumb is heavy on the moral scale—they feel rigged.

It does seem curious that in contemporary America—a place of considerable good fortune and privilege—one of the most favored narrative modes from high to low has to do with disavowals, passivity, and the disarmed protagonist.

(I could go on quoting Baxter’s essay—he’s a quotable essayist—but you should go out and read all of Burning Down the House instead. It’s that good.)

Dysfunctional narratives are a literature of avoidance, a strategic weaving of talking points and selective omissions to block counter-criticism. If that sounds like so much political maneuvering, that’s because it is.

“Mistakes were made”

Let’s start with what dysfunctional narratives are not: They’re not merely stories about dysfunction, as in dysfunctional families, or learning dysfunctions. Yes, a dysfunctional narrative may feature such topics, but that is not what makes it dysfunctional. The term describes how the story is told—the strategies and choices the author has made to tell their story.

Baxter points to Richard Nixon’s “mistakes were made” as the kernel for the dysfunctional narrative in modern America. (He calls Nixon “the spiritual godfather of the contemporary disavowal movement.”) He also holds up conspiracy theories as prototypes:

No one really knows who’s responsible for [the JFK assassination]. One of the signs of a dysfunctional narrative is that we cannot leave it behind, and we cannot put it to rest, because it does not, finally, give us the explanations we need to enclose it. We don’t know who the agent of action is. We don’t even know why it was done.

Recall the tagline for The X-Files, a TV show about the investigation of conspiracy theories: “The truth is out there.” In other words, the show’s stories can’t provide the truth—it’s elsewhere.

More memorably—and more controversially—Baxter also turns his gaze upon Jane Smiley’s A Thousand Acres, which features the use of recovered memories (“not so much out of Zola as Geraldo“) and grows into “an account of conspiracy and memory, sorrow and depression, in which several of the major characters are acting out rather than acting, and doing their best to find someone to blame.”

In a similar vein, a nearly-dysfunctional story would be The Prince of Tides by Pat Conroy. It centers on a family man who, via therapy, digs through memories of a childhood trauma which has paralyzed him emotionally as an adult. He gradually heals, and goes on to repair his relationship with his family. Notably, his elderly father does not remember abusing him years earlier, leaving one wound unhealed.

Another example would be Nathanael West’s A Cool Million, which follows a clueless naif on a cross-American journey as he’s swindled, robbed, mugged, and framed. By the end, the inventory of body parts he’s lost is like counting the change in your pocket. It might be forgiven as a satire of the American dream, but A Cool Million remains a heavy-handed tale.

This leads to another point: A dysfunctional narrative is not necessarily a poorly told one. The dysfunction is not in the quality of the telling, but something more innate.

Examples of more topical dysfunctional narratives include the story of Aziz Ansari’s first-date accuser, as well as the complaints of just about any politician or pundit who claims they’ve been victimized or deplatformed by their opponents. In almost every case, the stories feature a faultless, passive protagonist being traumatized by the more powerful or the abstract.

There’s one more point about dysfunctional narratives worth making: The problem is not that dysfunctional narratives exist. The problem is the sheer volume of them in our culture, the sense that we’re being flooded—overwhelmed, even—by their numbers. That’s what seems to concern Baxter. It certainly concerns me.

A literature of avoidance

In his essay Ur-Fascism, Umberto Eco offers this diagram:

one    two    three    four
abc    bcd    cde      def

Each column represents a political group or ideology, all distinct, yet possessing many common traits. (Think of different flavors of Communism, or various factions within a political party.) Groups one and two have traits b and c in common, groups two and four have trait d in common, and so on.

Eco points out that “owing to the uninterrupted series of decreasing similarities between one and four, there remains, by a sort of illusory transitivity, a family resemblance between four and one,” even though they do not share any traits. The traits form a chain—there is a common “smell” between the political groups.

Not all dysfunctional narratives are exactly alike, or share the exact same traits, but they do have a common “smell.” Even if a 9/11 conspiracy theory seems utterly unlike A Cool Million, both may be dysfunctional.

"Burning Down the House" by Charles Baxter

Likewise, in the traits that follow, just because a story doesn’t include all of them doesn’t mean it “avoids dysfunction.” Rather, dysfunctional narratives are built by the storyteller selecting the bricks they need to buttress their message:

  • A disarmed protagonist
  • An absent antagonist
  • Minimal secondary characters
  • An authorial thumb on the scale
  • “Pre-moralized”
  • A vaporous conclusion
  • Authorial infallibility and restricted interpretations

The most common trait of the dysfunctional narrative is a faultless, passive main character. Baxter calls this the “disarmed protagonist.” Baxter differentiates between “I” stories (“the protagonist makes certain decisions and takes some responsibility for them”) and “me” stories (“the protagonists…are central characters to whom things happen”). Dysfunctional narratives are the “me” stories.

And the errors these “me” characters make—if any—are forgivable, understandable, or forced upon them by dire circumstances. Compare this to the mistakes the people around them make—monstrous, unpardonable sins:

…characters [in stories] are not often permitted to make interesting and intelligent mistakes and then to acknowledge them. The whole idea of the “intelligent mistake,” the importance of the mistake made on impulse, has gone out the window. Or, if fictional characters do make such mistakes, they’re judged immediately and without appeal.

Power dynamics are a cornerstone of all narratives, but one “smell” of the dysfunctional variety is an extraordinary tilting of power against the main character. The system, or even the world, is allied against the protagonist. Close reads of these narratives reveal an authorial thumb on the story’s moral scale, an intuition that the situation has been soured a bit too much in the service of making a point. This scale-tipping may be achieved many ways, but often it requires a surgical omission of detail.

Hence the frequency with which the antagonist is absent in dysfunctional narratives. A crime in a dysfunctional novel doesn’t require a criminal. All it needs, in Robinson’s words, is for the main character to have endured some great wrong: “The work of one’s life is to discover and name the harm one has suffered.”

Poet Marilynne Robinson

Name the harm, not the perpetrator. Why not the perpetrator? Because often there’s no person to name. The harm is a trauma or a memory. The perpetrator may have disappeared long ago, or died, or have utterly forgotten the wrongs they inflicted (as the father does in Prince of Tides). The malefactor may be an abstraction, like capitalism or sexism. But naming an abstraction as the villain does not name anything. It’s like naming narcissism as the cause of an airliner crash. This is by design. Abstractions and missing antagonists don’t have a voice. Even Satan gets to plead his case in Paradise Lost.

No ending is reached in a dysfunctional narrative, because there’s only a trauma, or a memory, or an abstraction to work against. These injuries never heal. Memories may fade, but the past is concrete. By telling the story, the trauma is now recorded and notarized like a deed. “There’s the typical story in which no one is responsible for anything,” Baxter complained in 2012. “Shit happens, that’s all. It’s all about fate, or something. I hate stories like that.” These stories trail off at the end, employing imagery like setting suns or echoes fading off to signify a story that will never conclude.

The most surface criticism of these narratives is that we, the readers, sense we’re being talked down to by the author. “In the absence of any clear moral vision, we get moralizing instead,” Baxter writes. A dysfunctional narrative dog-whistles its morality, and those who cannot decode the whistle are faulted for it. The stories are pre-moralized: The reader is expected to understand beforehand the entirety of the story’s moral universe. For a reader to admit otherwise, or to argue an alternate interpretation, is to risk personal embarrassment or confrontation from those who will not brook dissent.

And making the reader uncomfortable is often the outright goal of the dysfunctional narrative. The writer is the presumed authority; the reader, the presumed student. It’s a retrograde posture, a nagging echo from a lesser-democratic time. (When I read A Brief History of Time, I was most certainly the student—but Hawking admirably never made me feel that way.) Dysfunctional narratives are often combative with the reader; they do not acknowledge the reader’s right to negotiate or question the message. With dysfunctional narratives, it’s difficult to discern if the writer is telling a story or digging a moat around their main character.

“What we have instead is not exactly drama and not exactly therapy,” Baxter writes. “No one is in a position to judge.” A dysfunctional narrative portrays a world with few to no alternatives. A functional narrative explores alternatives. (This is what I mean when I write of fiction as an experiment.)

This is why so many dysfunctional narratives are aligned to the writer’s biography—who can claim to be a better authority on your life, after all? But the moment a reader reads a story, its protagonist is no longer the author’s sole property. The character is now a shared construct. Their decisions may be questioned (hence the passive nature of the protagonists—inaction avoids such judgments). If the author introduces secondary characters, they can’t claim similar authority over them—every additional character is one more attack vector for criticism, a chipping away of absolute authority over the story itself. That’s what happened to sensitivity reader Kosoko Jackson in 2019, whose debut novel was pulped due to questions over his secondary characters.

Of all the traits listed—from the disarmed protagonist to the vaporous conclusion—the trait I find the “smelliest” is authorial infallibility and restricted interpretation. That’s why I used weasel language when I called Prince of Tides “nearly-dysfunctional”: The book is most certainly open to interpretation and questioning. In contrast, questioning a conspiracy theory could get you labeled an unwitting dupe, a useful idiot, or worse.

A Cambrian explosion

What Baxter doesn’t explore fully is why we’ve had this Cambrian explosion of dysfunctional narratives. He speculates on a couple of possibilities, such as their coming down to us from our political leadership (like Moses carrying down the stone tablets), or as the byproduct of consumerism. I find myself at my most skeptical when his essay stumbles down these side roads.

When Baxter claims these stories arose out of “groups in our time [feeling] confused or powerless…in such a consumerist climate, the perplexed and unhappy don’t know what their lives are telling them,” it seems Baxter is offering a dysfunctional narrative to explain the existence of dysfunctional narratives. He claims these dysfunctional stories are produced by people of “irregular employment and mounting debts.” I strongly doubt this as well. In my experience, such people are not the dominant producers of these narratives. Rather, they are the people who turn to stories for escape and uplift: the very comforts dysfunctional narratives cannot provide, and are not intended to provide.

Rather than point the finger at dead presidents or capitalism, I’m more inclined to ascribe the shift to a handful of changes in our culture.

The term “The Program Era” comes from a book of the same name detailing the postwar rise and influence of creative writing programs in the United States. This democratization of creative writing was not as democratic as once hoped, but it still led to a sharp increase in the number of people writing fiction. Most of those students were drawn from America’s upwardly-striving classes. And, as part of the workshop method used in these programs, it also led to a rise in those people having to sit quietly and listen to their peers criticize their stories, sometimes demolishing them. (Charles Baxter was a creative writing professor and the head of a prominent writing program in the Midwest. Many of his examples in Burning Down the House come from manuscripts he read as an instructor.)

With the expansion of writing programs came a rise in aspiring writers scratching around for powerful subject matter. Topics like trauma and abuse are lodestones when seeking supercharged dramatic stakes. Naturally, these writers also drew from personal biography for easy access to subject matter.

Another reason related to the Program Era is the heavy-handed emphasis on character-driven fiction over plot-driven fiction. I explore this theory here.

Another reason is staring back at you: The World Wide Web has empowered the masses to tell their stories to a global audience. This has created a dynamic where everyone can be a reader, a writer, and a critic, and all at the same time.

The natural next step in the evolution of the above is for storytellers to strategize how best to defend their work—to remove any fault in the story’s armor, to buttress it with rearguards and fortifications. (This is different from working hard to produce a high-quality experience, which, in my view, is a better use of time.) And there’s been a shift in why we tell stories: Not necessarily to entertain or enrich, but as an act of therapy or grievance, or to collect “allies” in a climate where you’re either with me or against me. Inaction in fiction has come to be praised as a literary virtue. Stories with characters who take matters into their own hands are often derided as genre fiction.

Pick up a university literary magazine and read it from cover to cover. The “smell” of dysfunctional narratives is awfully similar to the smell of social media jeremiads.

These are not the kind of stories I want to read, but it’s becoming increasingly difficult to distance myself from them. Writers should strive to offer more than a list of grievances or acts of score-settling. If it’s too much to ask stories to explain, then certainly we can expect them to connect the dots. Even if the main character does not grow by the last page, we should grow by then, if only a little.

Hell freezes over: Netflix adapts “White Noise”

White Noise promotional photo

While I’m mildly optimistic about the announced adaptation of Neuromancer to Apple TV+, I found myself stunned (aghast? tickled?) when I heard Netflix has adapted Don DeLillo’s White Noise for its streaming service. As I wrote on Mastodon and Twitter:

White Noise is not the kind of book one associates with popular entertainment, nor its author as the kind of person to acquiesce to its adaptation.

This merely touches the surface of my reaction to Netflix’s latest project.

If you’re not familiar, the novel White Noise is a 1985 literary comedy about Jack Gladney, a “professor of Hitler studies,” and his nuclear family in a fictional Midwestern college town. The early chapters depict suburban life as one soaked in crass consumerism, commercialism, and the ubiquitous nature of mass media. Things go pear-shaped when a railroad car spill on the edge of town triggers an “airborne toxic event,” leading to an evacuation and the concomitant strain on the family unit.

Remember, this is branded a comedy. The comic thrust of White Noise comes from its supposedly scathing parodies of American middle-class life. Take the novel’s opening paragraphs, where Gladney observes the college’s students returning to campus in single file:

The roofs of the station wagons were loaded down with carefully secured suitcases full of light and heavy clothing; with boxes of blankets, boots and shoes, stationery and books, sheets, pillows, quilts; with rolled-up rugs and sleeping bags; with bicycles, skis, rucksacks, English and Western saddles, inflated rafts. As cars slowed to a crawl and stopped, students sprang out and raced to the rear doors to begin removing the objects inside; the stereo sets, radios, personal computers; small refrigerators and table ranges; the cartons of phonograph records and cassettes; the hairdryers and styling irons; the tennis rackets, soccer balls, hockey and lacrosse sticks, bows and arrows; the controlled substances, the birth control pills and devices; the junk food still in shopping bags — onion-and-garlic chips, nacho thins, peanut creme patties, Waffelos and Kabooms, fruit chews and toffee popcorn; the Dum-Dum pops, the Mystic mints.

You’re forgiven if you stopped reading halfway through and skipped down. You didn’t miss anything.

Critic B. R. Myers categorizes this manner of list-making as a symptom of “a tale of Life in Consumerland, full of heavy irony, trite musing about advertising and materialism, and long, long lists of consumer artifacts, all dedicated to the proposition that America is a wasteland of stupefied shoppers.” That’s pretty much what the first half of White Noise adds up to. There’s more of these dreary lists in the book, and plenty of tin-eared dialogue to boot, as evidenced in this exchange between Gladney and his wife:

“It’s not the station wagons I wanted to see. What are the people like? Do the women wear plaid skirts, cable-knit sweaters? Are the men in hacking jackets? What’s a hacking jacket?”

“They’ve grown comfortable with their money,” I said. “They genuinely believe they’re entitled to it. This conviction gives them a kind of rude health. They glow a little.”

“I have trouble imagining death at that income level,” she said.

“Maybe there is no death as we know it. Just documents changing hands.”

“Not that we don’t have a station wagon ourselves.”

“It’s small, it’s metallic gray, it has one whole rusted door.”

Or this moment—the most famous in the book—when Gladney’s school-aged daughter talks in her sleep:

She uttered two clearly audible words, familiar and elusive at the same time, words that seemed to have a ritual meaning, part of a verbal spell or ecstatic chant.

Toyota Celica.

A long moment passed before I realized this was the name of an automobile. The truth only amazed me more. The utterance was beautiful and mysterious, gold-shot with looming wonder. It was like the name of an ancient power in the sky, tablet-carved in cuneiform.

I suppose for a certain type of person, this is a scream, gold-shot and looming. I’m not that type of person.

It’s the phoniness of White Noise I can’t let go of. The excuse of “it’s a satire” does not exempt the writer from grasping and depicting the reality of a situation. The power of satire is to capture the genuine and turn its underbelly over to tickle it—to reveal its absurdities in both premise and execution. DeLillo never accomplishes this. Professors don’t inventory their students’ goods from afar; husbands don’t tell their wives that the station wagon has a junky door (when any wife would know this full well); and if a daughter were repeating a car make and model in her sleep, no one would declare it a religious experience. The absurdity of White Noise is not the mindless consumers populating it, but that this novel somehow is considered a smart skewering of them.

Compare the above to George Carlin’s ridiculing of American materialism in his infamous “Stuff” sketch:

DeLillo’s range-finding jabs are timid compared to Carlin’s honed wit, from the basic observation that homes are just lockboxes for our precious objects, to the game-theoretic anguish of weighing which personal goods make the cut for an overnight excursion. He even indulges in his own Consumerland-like list (“Afrin 12-hour decongestant nasal spray”) that is far briefer, funnier, and better-curated than DeLillo’s weary catalogs. The laughs aren’t merely at Carlin’s on-stage antics, but in the gnawing sensation that we’re guilty of what he’s describing—and Carlin’s tacit admission that he’s guilty of it, too. Meanwhile, in White Noise, we’re supposed to be chortling at the mindlessness of our inferiors. DeLillo is othering America—for whose benefit? Why, Americans like him: Americans who deny their American-ness.

(In this sense, I suspect the Netflix adaptation will execute much like Adam McKay’s smug Don’t Look Up, a spoof also predicated on an America stupefied by cable television and fast food.)

It’s not merely the elitism that fails to connect. Gladney’s field of “Hitler studies” is never really fleshed out. It could have been a fascinating device (although it risked from page one falling into the trap of Godwin’s Law). As the book wears on, the Hitler studies conceit feels like a gag DeLillo thought would reap comic gold until, chapters in, he realized the idea had run out of gas. The best he can do is have Gladney deliver a lecture comparing Hitler to Elvis Presley—there’s your Godwin’s Law at work. When Gladney admits he’s only recently learned German, you realize how thin the satire really is: This is not a real professor of Hitler studies.

When I say “Gladney is not a real professor of Hitler studies,” I don’t mean it in the same way that W. H. Auden said Shrike is not a real newspaper editor in Nathanael West’s Miss Lonelyhearts. Auden meant that Miss Lonelyhearts is not about newspapermen or journalism—the premise of a man taking a position as an advice columnist is merely a convenience to place the book’s heart-wrenching confessional letters into his hands. Gladney’s field is very much intended to satirize him and academia, but the joke is never explored and left unfulfilled. It becomes a shingle to hang around Gladney’s neck, doing precious little to inform his worldview or way of life.

The main course for White Noise, though, is the American bourgeoisie. The metaphysics of supermarkets are discussed by the book’s characters (always with a straight face). Death is discussed in excruciating abstractions and legalistic terms. The book concludes with Gladney looking out over a hazy dusk, the air thick with toxic chemicals, and admiring its beauty. (No—really.)

White Noise by Don DeLillo

What’s the problem with Netflix adapting the book? In truth, I don’t care much one way or the other. What stunned me—and motivated those posts on social media—is that White Noise was always intended to be a sharp poke in the eye for middle America, with plenty of scorn reserved for major corporations and the mass media.

In other words, White Noise satirizes the type of corporation that’s adapting it into a movie, mocks the people that corporation will be marketing the film to, and despises the corporation collecting its profits as the mindless mob watches on from the comfort of the sofas in their McMansions, with their living rooms, their family rooms, their bedrooms, their candy rooms, their office rooms, their great rooms.

Why do they have great rooms?

What is a great room?

Twenty Years Later: B. R. Myers, A Reader’s Manifesto

See the “Twenty Writers, Twenty Books” home page for more information on this series.


Twenty years ago this month, The Atlantic published a critical essay on the then-current state of American prose. As dry and dusty a topic as that sounds—doubly so when published by an august New England monthly—the essay improbably became a cultural sensation, triggering op-eds in international newspapers, vitriolic letters-to-the-editor, and screechy denunciations from professional reviewers. Suddenly readers everywhere were debating—of all things—the modern novel.

Writer B. R. Myers unexpectedly touched a raw nerve in an America that was better-read than the literati believed possible. “A Reader’s Manifesto” dissected without mercy the work of such literary lights as Don DeLillo, Annie Proulx, Cormac McCarthy, Paul Auster, and David Guterson. Myers didn’t merely criticize their prose on terms of its grammar and diction. He attacked these writers on grounds of pretentiousness, and accused the literary establishment of abetting their ascendancy.

Charged stuff, but still very inside baseball. To rouse an impassioned response from readers over books like White Noise and Snow Falling on Cedars was a remarkable moment in American culture. It’s all the more notable a moment considering some of the above authors’ books satirize the inanity of American culture.

Looking back, it seems dream-like for a critical examination of literary novels to ignite such a furor. I can’t imagine such a thing happening today. Then again, it seemed equally unimaginable twenty years ago.

History of Manifesto

Fed up with fawning reviews of works like Timbuktu and All the Pretty Horses, Myers first wrote his manifesto in 1999. Using careful, reasoned prose punctuated with wit and scathing humor, he roasted passages from prize-winning books—passages which had been the subject of so much praise by literary reviewers as examples of masterful writing. Using tried-and-true close-reading techniques, he punctured these writers’ obtuse and repetitive language to reveal their prose to be turgid, meaningless, and pretentious.

Myers was convinced no magazine or newspaper would publish his critique. He was an unknown in the literary world; a near-anonymous monograph on the quality of modern literary prose hardly promises to fly off bookstore shelves.

So Myers did what many writers would do in later years: He self-published his manifesto on Amazon. He titled it Gorgons in the Pool: The Trouble with Contemporary “Literary” Prose after a particularly choice passage in a Cormac McCarthy novel. “Nothing happened,” he later wrote. “I went online and ordered three copies for myself; they were the only ones ever sold.”

One of the copies he mailed out wound up in the hands of an Atlantic editor, who offered to publish rather than review it. The Atlantic demanded severe cuts and revisions, and the version published in the magazine comes off nastier than he’d intended. He also had the gut-wrenching task of waving off the Times Literary Supplement from publishing a review of Gorgons, as he’d already signed a contract with The Atlantic. (“As someone said to me the other day, ‘How do you know [Times Literary Supplement] wasn’t going to tear you apart?'” he later remarked. “I suppose everything worked out for the best.”) Bad timing would develop into a cadence for Manifesto.

Gorgons in the Pool by B. R. Myers

The Atlantic article, tucked away deep inside the July/August double-issue, improbably made Myers a brand name overnight among contemporary lit readers and writers. His outsider status only buffed his credentials as a hard-nosed reviewer. Even his use of first initials added a mysterious air to his origins. Although he received praise from many quarters, it mostly came from readers and (interestingly) journalists, a profession notorious for attracting writers shut out of the book publishing world.

Although the literati initially ignored the essay, drumbeats of support from readers for Myers’ basic thesis—modern lit is pretentious—soon couldn’t be denied. Much of the early criticism directed back at Myers originated from book reviewers, book supplement editors, and literary novelists. Some of it was quite vitriolic, outraged that anyone could suggest the writers he selected weren’t unassailable geniuses. Many exuded an air of befuddled annoyance: How could anyone give Myers or his thesis an ounce of credence? A few were outright smug about it, as though their refutations slammed the door on Myers and put an end to the dreary affair once and for all.

It didn’t work. The rebuttals only stoked increased support for Myers from readers around the world. The back-and-forth debate raged online and, as a mark of the times, across letters-to-the-editor pages, which printed point and counterpoint letters for weeks. This simply did not happen, even in a time when most people had their news delivered to them via bicycle.

Frustrated, the literary professional class took up what is today recognized as a surefire stratagem for shutting down an Internet debate: They doxxed him.

Not exactly—while The New York Times Book Review didn’t print Myers’ phone number and street address, they did see fit to delve into his past for anything incriminating (much like the Twitterati today will dumpster-dive people’s feeds to dig up embarrassing tweets from eight years ago). Demonstrating the ethics of a tabloid reporter, editor Judith Shulevitz dished to her readers that Myers was a foreigner (he’s not) who lived in New Mexico (i.e., not New York City) and was at that moment preparing to spend a year in Seoul “teaching North Korean literature to the South Koreans.” (Myers’ response: “I would probably have described my job in a way less calculated to evoke the phrase ‘selling ice to the eskimos.'”)

Shulevitz wrote Myers “is not just a man without a stake in the literary establishment. He is foreign to it in every way.” His manifesto could have

proved that a critic needs nothing more than taste to make a case. Does Myers’s essay do all this? It does not, because Myers doesn’t have a sure grasp of the world he’s attacking.

Most of the denunciations of Manifesto are steeped in this kind of haughty condescension, and it served Myers well.

(I should add that I’m uncomfortable throwing around the phrase “literary establishment” as a catch-all for a wide and disjointed segment. Yet Shulevitz seemed comfortable acknowledging its existence in 2001, so I’ll assume it existed then and exists today.)

Manifesto continued to be a lodestone of bad timing. The Times‘ nativist pillorying of Myers was published on September 9, 2001. Two days later, the Times—and the rest of the world—was focused on a very different subject. The literary debate Myers had sparked that summer ground to a halt.

The history of Manifesto could easily have ended with the attacks on the World Trade Center, if not for events which nudged a little harder on the snowball Myers had started rolling in 1999.

First was Oprah selecting Jonathan Franzen’s The Corrections for her book club. To get an idea of how close this shaved against Myers’ Manifesto—and his continued game of footsie with bad timing—the same edition of the New York Times Book Review that exposed Myers as a Korean-teaching foreigner also included a glowing review of The Corrections laden with an irony of Oedipal proportions: The reviewer gives a winking approval that the book contains “just enough novel-of-paranoia touches so Oprah won’t assign it and ruin Franzen’s street cred.” Actually, Oprah was set to announce The Corrections as her next book club pick four days later (only to postpone it due to 9/11). When Franzen bristled that Oprah was attempting to smarten up her book club by associating it with the “high-art literary tradition,” a new literary controversy erupted to displace Manifesto.

Although the imbroglio between Oprah and Franzen is better framed as tabloid-level tit-for-tat, Manifesto played a minor role. Online commenters made the point that Myers’ gripes about the literary establishment sneering down on the reading public were playing out before the nation’s eyes. Gone was his critics’ suggestion that, on this point, Myers was jousting with windmills.

The second event was Melville House publishing A Reader’s Manifesto: An Attack on the Growing Pretentiousness in American Literary Prose in 2002 (one of the first two books produced by the then-fledgling publisher). This full-length treatment gave Myers the opportunity to restore much of what was lost from Gorgons in the Pool when it was adapted for The Atlantic. It’s this edition I’ve based this review on.

The backward glance

The Atlantic Monthly, July/August 2001

I vividly recall reading “Manifesto” in the summer of 2001. I’d written my first novel and was discovering the ego-melting process called “finding a literary agent.” Over the prior years I had enrolled in evening and weekend creative writing courses around the Bay Area, where many of the books Myers passed judgment on were held up as exemplary models. Also at the time I was a member of a weekly “writers’ reading group.” A member of the group handed me a Xerox of The Atlantic essay along with a half-joking warning not to take anything this Myers guy had to say too seriously.

I wound up taking B. R. Myers quite seriously. I had never read anything like “A Reader’s Manifesto.” Rereading Myers’ book for this post, I still marvel over his concision and convictions. It can be read in a single sitting, and unless you’re a grump, it will keep you engaged from start to finish. Myers understands well the game he’s taken up: He can’t poke a stick at others’ bad prose if his own prose is lacking. His manifesto is meticulous, refreshing, lively, and enlightening, as seen here when he trains his gimlet eye on McCarthy’s All the Pretty Horses:

As a fan of movie westerns I refuse to quibble with the myth that a rugged landscape can bestow an epic significance on the lives of its inhabitants. But as Conrad understood better than Melville, the novel is a fundamentally irreverent form; it tolerates epic language only when used with a selective touch. To record with the same majesty every aspect of a cowboy’s life, from a knife-fight to his lunchtime burrito, is to create what can only be described as kitsch.

Not only is this arguable, there’s a lot packed in there to argue with: I find this to be a positive.

Or here, where he’s analyzing David Guterson’s output:

…a slow tempo is as vital to his pseudo-lyrical effects as a fast one is to Proulx’s. What would otherwise be sprightly sentences are turned into mournful shuffles through the use of tautology. “Anything I said was a blunder, a faux pas,” “a clash of sound, discordant,” “She could see that he was angry, that he was holding it in, not exposing his rage,” “Wyman was gay, a homosexual,” and so on.

This level of tight engagement with the work at hand shows this is well above the usual culture-war crap that’s saturated our nation’s dialogue for decades now.

Some of his lines of attack are novel. Performing a close and scathing read of Annie Proulx’s self-approving dedication in Close Range (“my strangled, work-driven ways”) is the kind of antic you’d expect of the University Wits or Alexander Pope. His oft-quoted rejoinder to an exchange between Oprah and Toni Morrison is his most acidic and least endearing: “Sorry, my dear Toni, but it’s actually called bad writing.” (Less oft-quoted is his explanation: “Great prose isn’t always easy but it’s always lucid; no one of Oprah’s intelligence ever had to puzzle over what Joseph Conrad was trying to say in a particular sentence.”)

Regardless of what you might have read elsewhere, the boilerplate attacks on Myers don’t stand up to scrutiny. Supposedly he values plot over form; he disdains “difficult” books; he cherry-picked bad passages from the books he attacks; he selected writers who’d gone out of fashion; or the confounding claim that he’s a humorless cur prone to sarcasm and snide shots. Having read his book at least four times now, I say none of these complaints hold water. (Sarcasm may be the lowest form of wit, but it’s not humorless.) I’m not saying there’s no room for criticizing Manifesto, only that dismissing Myers without engaging his points is not fruitful.

And there’s plenty in Manifesto for writers to take away. Rather than being satisfied with throwing spitballs at modern lit, he contrasts prose he finds vapid with prose that stands up. Myers will forever get grief for quoting Louis L’Amour’s Hondo with approval, but the passage he includes is a model of clean, effective writing that succeeds in characterizing the protagonist with the deftness of a parable. Myers makes the point several times that the prose he’s complaining about could have been written with less-pompous English, and takes a few stabs at editing it as proof. He’s engaged with the texts under the gun, a marked difference from his critics who sniff down on him (and, it seems, cannot be bothered to quote and refute his specific claims).

My take-away from Manifesto for writers is, don’t produce affected writing, produce affecting writing: Language that stirs the reader and shines a light rather than obscures. Good editing requires close reads of your prose, and questioning what every word is doing in a sentence. Ditch the idea that affecting prose is “easy” and affected prose is “difficult,” an avant-garde pose. One critic complained “‘prose,’ for [Myers], equals syntax plus diction, and is expected to denote, rather than to evoke.” I think he expects it to do both.

Revolt of the reading public

The significance of Myers’ Manifesto is not the perverse thrill of taking down sacred cows like McCarthy and DeLillo, but how eerily it presaged the next twenty years in American publishing. The circuitous route Myers followed from Gorgons in the Pool to The Atlantic Monthly to Melville House is a once-in-a-generation aberration, but the elements of getting his critique out of the word processor and into the hands of readers ring awfully familiar today.

When I read in 2002 of Myers self-publishing Gorgons on Amazon, I was floored: I had no idea such an opportunity was available to mere mortals. It was a bona fide light-bulb moment, the first time I pondered the possibility of making an end-run around the New York City publishers and selling my work directly to readers. Ten years later, not only was Amazon still open to self-publishing, the company was rapidly tooling up to make publishing your own e-book as easy as clicking a mouse button.

Less obvious today, but notable in 2001, was Myers praising Amazon user reviews (of the books Myers was criticizing, not his own overlooked Gorgons). Before Manifesto, any reference in the popular media to Amazon’s user reviews was bound to be dismissive or sardonic. Back then, cultural commentators saw putting opinion-making into the hands of readers as about as ludicrous as a truck driver penning a starred Michelin review. (Don’t forget, there were still people in 2001 arguing the Internet was a passing fad—that it was faster to drive to the bookstore and buy a book than for Amazon to deliver it, ergo Amazon’s days were numbered.) Myers didn’t merely approve of Amazon user reviews, he used them as evidence that readers can and do understand difficult literature. I believe this is the first time I saw anyone in the cultural sphere do this.

Self-publishing; “average people” versus the experts; the power of reader reviews; the pseudo-doxxing Myers was subjected to; online discussion boards keeping the debate alive; and vitriolic denunciations from on high. All that’s missing is a hashtag and some Bitcoin changing hands, and the dust-up around Manifesto would sound like any number of social media episodes we’ve seen in recent years.

Martin Gurri’s The Revolt of the Public deserves mention here. Although I’ve not read it, I have read plenty of reviews and analyses, simply because this 2014 book is claimed to have predicted the rise of Donald Trump, Brexit, cancel culture, the Capitol Hill attacks, QAnon, #MeToo, and more. (It too was self-published on Amazon.)

Gurri’s thesis is that the Internet is destabilizing public respect for institutional authority and, in due course, undermining the authorities’ control over social and political narratives. The expert class, once considered the final word, now must defend itself from an increasingly skeptical public.

It seems to me that the narratives being disrupted by digital communications may not merely be political narratives but also traditional ones—the narratives offered by the literary novel, and the narratives sold to the public by the literary expert class. Not only are big-name authors being treated with skepticism by the general public, so are the stories they proffer as significant, both for their literary heft and for their cultural insight. Look no further than the controversy surrounding last year’s American Dirt by Jeanine Cummins for an example of voices from below shouting up at the ensconced above, or the backlash suffered by Sarah Dessen after shaming a critical reader.

The disruption to the literary world even extends to novelists’ fawning reviewers. There is less distinction here than would first appear: Literary novels are often reviewed by other literary novelists. This incestuousness would be a scandal in other fields. “Imagine what would happen if the Big Three were allowed to review each other’s cars in Consumer Reports,” Myers noted in an interview. “They’d save the bad reviews for outsiders like the Japanese.”

A before-and-after example of the Internet’s effect on the publishing world is Lorenzo Carcaterra’s Sleepers (1995) and James Frey’s A Million Little Pieces (2003). Both were mega-bestsellers whose publication dates bookend the Internet’s ascension in daily life. Both were published as memoirs, and both had their factual accuracy challenged. The mass media reported the controversy around Sleepers by copy-and-pasting publisher press releases and quoting book agents. A Million Little Pieces was put under the Internet’s collective magnifying glass thanks to an investigation by the amateur web site The Smoking Gun.

This people-powered exposé became a nightmare for James Frey, and his reputation never recovered. Editions of A Million Little Pieces (another Oprah book club pick!) now include a publisher’s note warning of “certain embellishments” and “invented” details: “The reader should not consider this book anything other than a work of literature.”

Carcaterra largely escaped unscathed in 1995 thanks to the controversy being framed by the media as a publishing industry squabble. Sleepers is still sold as a memoir. (Funnily enough, it’s also listed under Amazon’s “Hoaxes & Deceptions” category.) Carcaterra’s luck can be measured in years. If Sleepers had been a bestselling memoir in 2005, the Internet would have torn it to shreds.

“Leaders can’t stand at the top of pyramids anymore and talk down to people,” Martin Gurri writes. “The digital revolution flattened everything.” I say A Reader’s Manifesto was the initial deflating puncture of the literary world’s cozy status quo.

Engendered reputations

In the conclusion of Manifesto, Myers writes:

I don’t believe anything I write will have much effect on these writers’ careers. The public will give them no more thought in twenty years than it gives, say, Norman Rush today, but that will have nothing to do with me, and everything to do with what engendered their reputations in the first place.

(If you’re wondering who Norman Rush is, I confess I had to look him up myself.)

Some of the rebuttals directed at Myers in 2001 claimed a few of these authors were already “on their way out,” although each critic seemed to formulate a different list of who remained relevant and who was exiting stage left. I’m tempted to produce a list of the writers whose work Myers criticized to see where their reputations stand today. I won’t do that; any reader so inclined could make such a list on their own.

I will point out that some of Myers’ subjects have sunk into a comfortable life of teaching, penning the occasional pop culture piece, and a general resting upon laurels. Myers makes a couple of pointed barbs about The Old Man and the Sea, but at least Hemingway was still throwing left hooks at the end of his life.

(When Myers’ critics claim that literary book awards and glowing reviews in highbrow magazines are meaningless, or that Myers ignored genre fiction’s own system of awards and reviews, they’re overlooking the enduring social capital of “literary significance.” A science-fiction writer receiving big-time accolades in 2001 is not going to be, in 2021, a tenured professor traveling the writer’s retreat circuit as a featured speaker and penning fluffy think pieces for Harper’s. The self-propelling feedback loop that is the literary world should not be discounted.)

Note that Myers leaves unsaid what exactly engendered these authors’ reputations in the first place. The optimist in me thinks he’s referring to the evanescence of their writing postures—live by the sword, die by the sword.

The pessimist in me suspects what really engendered their reputations is a resilient enabling literary class which eagerly maintains its country-club exclusivity while claiming commitments to diversity. Even in the face of a massive shift in digital publishing, and the concomitant explosion of voices now available via e-books and print-on-demand, the literary establishment remains a closed shop. Its reviewers walk hand-in-hand with big publishers, who in turn regularly ink seven-figure publishing deals and expect a return on said investment. Positive reviews in well-placed periodicals are an important component of any publishing marketing plan. (The podcast “Personal Rejection Letter” explored this question in 2017, along with a retrospective of Myers’ Manifesto.)

In other words, the authors Myers put under the microscope may or may not be relevant twenty years later, but the system that held them aloft remains alive and strong. The Internet has kneecapped it some—the literary establishment is less commanding than it once was—but it’s still humming along.

Could Myers have at least shifted the conversation? I say he did. While Jonathan Franzen’s 1996 “Perchance to Dream” (re-titled “Why Bother?”) and Tom Wolfe’s 1989 “Stalking the Billion-Footed Beast” are both considered modern literary manifestos of great import, it’s plain to me that Myers’ Manifesto has shown far more staying power in the public’s and writers’ consciousness. Even in a 2010 critical response to B. R. Myers’ review of Franzen’s Freedom, the comments section swings back and forth on the significance of Myers’ Manifesto, with the most recent comment coming in 2016. There are YouTube videos produced as late as last year going over the debate Myers ignited twenty years ago.

Meanwhile, in creative writing courses across America, mentioning Myers’ name will still earn an eye-roll and a dramatic sigh from the instructor, wordlessly asking when this guy will just go away.

The double-edged sword

Ally Sheedy as Allison Reynolds

In The Breakfast Club, introverted Allison dares rich-girl Claire to say if she’s a virgin. When Claire demurs, Allison says,

It’s kind of a double-edged sword isn’t it? … If you say you haven’t [had sex], you’re a prude. If you say you have, you’re a slut. It’s a trap.

This is how I feel when the question comes up about the distinction between literary and genre fiction. If you write literary novels, you’re a prude. If you write genre books, you’re a slut.

Is it really that simple? Nothing in this world is so simple. Yet, here are some true-life examples from my own experiences:

Prude

While shopping around my first novel, I got a tip that a prestigious national imprint had a new editor seeking fresh manuscripts. I sent mine along, hopeful but also realistic about my chances.

The rejection slip I received was fairly scathing. The editor claimed my book read of a desperate MFA student who doesn’t understand the “real world.” It was fairly derogatory (and oddly personal, considering this editor and I shared a mutual friend). A simple “thanks, no thanks” would have sufficed, but this editor decided it was my turn in the barrel.

Make no mistake: This hoity-toity imprint reeks of MFA aftershave. It’s not a punk-lit imprint. It’s not an edgy alt-lit imprint. It publishes high-minded literary fiction. The author list is upper-middle- to upper-class, blindingly white, and yes, many of them hold an MFA.

And I hold an MFA too, so perhaps the criticism is spot-on—except I wrote the bulk of the novel before I set foot in grad school. I didn’t aim for it to be a literary masterpiece. I wanted to write a page-turner. It’s categorized as literary fiction because it’s not mystery, science-fiction, fantasy, romance, Western, thriller, or YA/New Adult. Write a story about a character and his family, and it’s not merely literary—you’re trying to “be literary.” Who knew?

In my novel, the main character has grown up in a town of physicists who design and perfect weapons of mass destruction—this is the actual childhood I experienced. I thought it would be a good read. (It is a good read.) My character is snarky, sarcastic, crude—and at times, he can be a right asshole. The technical background of the novel is, as they say, ripped from the headlines.

This seems pretty real-world to me. I thought I was writing a funny novel with an unusual setting and situation. This editor took it upon herself to declare I’m actually a Raymond Carver-esque hack penning quiet stories of bourgeois desperation. And that I should stop being that writer.

So, there’s the rejection slip telling me to quit being literary, even though that’s a categorization I never asked for. And it came from a literary publishing house. It’s kind of a double-edged sword, isn’t it?

Slut

After Amazon published my second novel, I began to sense a change in the attitudes of many of my writer friends. At first it was slight, like a shift in air movement when a door in the room is opened. Gradually, though, the emotional tension grew to the point it could not be denied.

I wondered if the problem was one of jealousy. My book had been picked up by a large company, but Amazon was not what you would call an A-list publisher (back then, at least—times have changed). And, they only published my book in digital Kindle format. I had to rely on CreateSpace to offer a paperback edition. The advance money was not huge, and the publicity not so widespread. It all seemed pretty modest to me, and I thought my friends would recognize it as such.

My novel is set in an alternate universe where human reproductive biology is tweaked in a rather significant way. This book is obviously science-fiction. Since the protagonist is a thirteen-year-old girl, it neatly fits into the YA slot as well.

And I’m comfortable with those categorizations. I grew up reading Asimov, Bradbury, Silverberg, and other science-fiction writers of the Golden and Silver Ages who laid so much groundwork for the genre. More importantly, I wanted to write another page-turner, a real unputdownable book. From the Amazon reviews, I think I succeeded.

The tip-off for the issue with my friends was when my wife asked one of them if she’d read my new book. The answer was a murmured, “I would never read a book like that.” This from a person I counted as a friend, and had known for ten years.

Before this, I’d heard her repeat the trope that all genre fiction is formula, as mindless as baking a cake from a box of mix. I always let it go, for the sake of harmony. Now it was being thrown in my face.

The funny thing is, one Amazon editor told me she felt in hindsight my science-fiction YA novel was not a good fit for their imprint. They were more interested in “accessible” genre fiction for their readers, and that my work was—yep—too literary. It’s a trap.

Tease

When Claire refuses to reveal if she’s a virgin, bad-boy Bender suspects she’s a tease:

Sex is your weapon. You said it yourself. You use it to get respect.

Between being a literary author and a genre writer, there’s a third way: The literary-genre writer. These are the teases. They write genre fiction, but make it literary to get respect. And, often they do.

Examples of teases are Haruki Murakami, China Miéville, Cormac McCarthy, and Margaret Atwood. Much of their work is patently genre, but they are received and analyzed with the same awe and respect reserved for literary novelists.

The knee-jerk reaction is to say these writers prove it’s possible to write literary-genre fiction. I don’t think that’s true at all, though. It only proves that authors accepted into the literary realm get to have it both ways: They avoid the stigma of genre fiction while incorporating the high-stakes dramatic possibilities genre fiction offers.

Consider another literary-genre writer: Kurt Vonnegut. He wrote science-fiction, but his books are rarely shelved in that section. Hell, he even wrote a diatribe about how bad science-fiction writing is (Eliot Rosewater’s drunken “science-fiction writers couldn’t write for sour apples” screed). Yet, Vonnegut is rarely, if ever, permitted into the same circle as Atwood or McCarthy. There’s something “common” about Vonnegut. Only at the end of his life was he cautiously allowed into the literary world. Some still say he doesn’t belong there.

I remain unconvinced it’s the sophistication of a novel itself that moves it into the upper literary tiers. I can point to plenty of books supposedly in the literary strata that are not exceedingly well-written or insightful. Something other than an airy quality is the deciding factor.

The success of a handful of literary-genre writers doesn’t open doors, it only creates a new double-edged trap. An author who pens a literary-style novel can claim it’s literary. See, he added his book to the “Literary Fiction” section on Amazon! But does it mean he’s a member of the literary world? Not at all. There’s something else holding him back.

The trap

The literary/genre distinction purports to explain every aspect of a story: Its relevance, its significance, its quality, its audience, even the goals of the writer when they sat down to write it. Nothing in this world is so simple.

There’s a smell about the literary/genre divide. It smells like class. Literary is upper-class, and pulpy genre is for the proletariat. This roughly corresponds to the highbrow/lowbrow classifications. We even have a gradation for the striving petty bourgeoisie, middlebrow.

(Even calling a novel “middlebrow” is treated with disdain—a lowbrow attempt to raise a genre book to a higher status. It’s easy to fall down the literary/genre ladder, but difficult to ascend.)

I definitely believe the Marxist notion of class exists, both abroad and here in the United States. What I don’t believe is that a work of fiction is “of a class.” Books are utilized as a marker of class—tools to express one’s status. Distinctions like literary vs. genre communicate to members of each class which books they should be utilizing…I mean, reading.

Amazon says new Kindle replicates experience of holding real book cover in public

This is not the most original thought, but is it really that simple? Nothing in this world is so simple. And I don’t want it to be simple. As with food, the best reading diet is varied, eclectic, and personal.

Note the real damage here. If a writer writes the books he or she wants to write, and puts their heart and soul into making it the highest-quality they can for their readers, all that hard work is instantly deflated by the literary/genre prude/slut highbrow/lowbrow labels.

And if a writer introduces genre conventions into their literary work, they’re a sell-out—a prude tarting it up for cheap attention. And if the author of a genre novel tries to achieve a kind of elegance with their prose and style, they’re overreaching—a slut putting on a church dress. You use it to get respect. We’re punishing people for being ambitious.

I’ve said it elsewhere: People will judge a book by its cover, its publisher, the author’s name, the number of pages, the title, the price, the infernal literary/genre label, its reviews, the number of stars on Amazon—everything but the words between the covers. You know, the stuff that matters.