Not rethinking realism, as in rethinking philosophy’s single, objective reality, hard as rocks and nails. No, I mean rethinking realism in the sense of questioning the elevation of literary realism over the many other forms of fiction.
Realism has long been the go-to form in literature for telling a story a certain way. An entire literary style—Naturalism—sprang from the sense that Romanticism had gone too far and produced a literature divorced from the world as commonly experienced. The pendulum later swung in the other direction, and for a period realistic literature was derided as bourgeois and reactionary. Since World War II, with the rise of creative writing programs and a reinvigorated enforcement of upper-class distinctions, kitchen-table realism has returned to the pinnacle of literary loftiness in America.
So it’s funny to me that realism is also so important in popular entertainment. Nowhere is this more true than with television, which is obsessed with depicting reality—from “you are there”-style news reporting to game shows branded as “reality TV.” When the writers of TV’s M*A*S*H killed off Col. Henry Blake in a season finale, they were inundated with letters from outraged viewers. The Emmy award-winning writing team’s response was, “Well, that’s reality.” American auteur Robert Altman famously ends Nashville with an out-of-the-blue assassination of a central character. Why? Because, he explained, that’s reality.
It’s not that these plot points are faulty or wrong-headed. My complaint is that the excuse—“It’s reality”—is a lazy defense of artistic choices. Writers should cop to their decisions rather than take the passive route and say reality made the choice for them. Writers should ask themselves if a “realistic” moment is adding to, or subtracting from, the story.
Anyone who’s attended a creative writing class, workshop, or MFA program is familiar with the high ground presumed by realism. The trendy term is “psychologically realistic fiction.” In writing programs, names like Raymond Carver, Amy Hempel, Tobias Wolff, and Tim O’Brien are tossed out as the zenith of American writing. Students are explicitly encouraged to emulate them, and their importance is implicitly signaled by their repeated presence in syllabi and required-reading lists. (I’ve read “The Things They Carried” at least eight times over the course of decades of writing groups and classes.) These authors are lionized for many reasons, but importantly, they all wrote about reality.
(There are two exceptions worth mentioning: One is magical realism, although its high regard in writing programs is tied up with identity politics. The other is Borges, whom I jokingly refer to as science-fiction for MFA students. It must be noted that both exceptions originate from outside the United States. Kafka, incidentally, is read and praised in writing programs as well, but not in such a way as to encourage emulation—I suspect my instructors liked the idea of Kafka more than Kafka’s output.)
Look at how so much literary fiction operates. Protagonists tend to be thoughtful, rational, and deliberative—often, they exhibit little to no affect. Characters in opposition tend to be boorish, thoughtless, and emotional. Dialogue is either flat and unadorned, or snappy, like the patter of a stand-up comic. Scenes unfold as one character utters a brief line, followed by paragraphs of rumination; the other character responds, followed by more paragraphs of rumination.
The prose might be good—it might even be inspired—but is this realism? Going through contemporary literary magazines, reading one story after another, I’m not sure you’ll find much psychological realism, at least not in the sense of psychiatry’s DSM-5.
Genre fiction is not immune either. Too often connoisseurs of hard-boiled detective fiction and tough-guy novels claim their favorite authors are superior because of their attention to realism. Raymond Chandler’s “The Simple Art of Murder” is wonderful and insightful criticism, but at its heart is a trashing of the classic British mystery because “fiction in any form has always intended to be realistic.” It’s one of the few arguments in the essay that I question.
Janet Burroway wrote, “Sometimes reality doesn’t make for good fiction.” It’s a tough lesson to learn, and one that even seasoned writers fail to grasp.
After all, there is no widely accepted maxim stating that the primary purpose of story is to reproduce reality. Fiction is supposed to be an expression of a writer’s inner state, not a dry report of the who, what, where, and when. Besides, why do we need to reproduce reality with such fidelity? We’re soaking in it. If you want reality, put down your phone or leave your computer screen. You have returned to reality, effortlessly.
In a writing class I attended, one of the students was a fan of horror, particularly H. P. Lovecraft and Robert Chambers’ The King in Yellow. At an end-of-semester presentation before the class, he expressed frustration at the hard-realism reading list we’d been given, and at the months of instruction requiring him to write in similar form. “Reading about reality is like reading about your job on your day off,” he told us. There’s something to that.
Story creates a transcendence within the reader. This transcendence defies reality while mimicking it—reality is Play-Doh in the hands of an adept writer. From hard realism to squishy-soft fantasy and everything in-between, great writing takes me to another place and time, a chance to live another person’s life. Books are “portable dreamweavers.”
Wearily, I started Blythe’s essay expecting more of the same, and, lo, found it: Computers and the Internet, he contends, have done much to destroy literary fiction. By this point, I’m surprised when any writer pursuing such a thesis bothers fortifying their argument with examples or statistics. Blythe does not disappoint on that count: Other than some “c’mon, look around, you know what I’m saying,” the argument is made sans corroborative evidence. Of course the Internet has wrecked American literature. Why bother denying it?
It’s telling, then, that Blythe opens with the usual barrage of accusations about digital distractions—“Can you read anything at all from start to finish, i.e. an essay or a short story, without your mind being sliced apart by some digital switchblade?”—and then, to prove how things used to be so much better way back when, he segues to life as an Esquire editor in the 1980s and 90s:
[Rust Hills] and I would occasionally drink two or three Negronis at lunch, sometimes at the New York Delicatessen on 57th Street, and talk about the writers and novels and short stories we loved (and hated). … Then he and I would happily weave our way back to the office at 1790 Broadway, plop down in our cubicles and make enthusiastic phone calls to writers and agents, our voices probably a little louder than usual.
The jokes about fiction editors at a national magazine choosing stories to publish after a three-cocktail lunch write themselves, so I won’t bother. (Although I should, since, early in my writing career, I had high hopes of placing a short story with a publication like Esquire. Perhaps I should have mailed a bottle of Bombay with each of my submissions.)
The dichotomy Blythe illustrates is telling: The hellish “after” is the mob writing Amazon user reviews and him not knowing how to turn off iPhone notifications; the blissful “before” is editorial cocktail lunches and not having to give a rat’s ass what anyone else thinks.
One counterpoint to Blythe’s thesis: The 1980s had plenty of distractions, including the now-obvious inability to silence your telephone without taking it off the hook. Another counterpoint: If you want to drink Negronis and argue literature over Reubens, well, you can do that today too. A third counterpoint: A short story printed in the pages of Esquire was sandwiched between glossy full-color ads for sports cars, tobacco, and liquor—most featuring leggy models in evening gowns or swimsuits. Distractions abounded, even before the Internet.
But none of these are what Blythe is really talking about. What he bemoans is the diffusion of editorial power over the past twenty years.
Blythe throws a curveball—a predictable curveball—after his reminiscences about Negronis and schmears. Sure, computers are to blame for everything, but the real crime is that computers now permit readers to make their opinions on fiction known:
Writers and writing tend to be voted upon by readers, who inflict economic power (buy or kill the novel!) rather than deeply examining work the way passionate critics once did in newspapers and magazines. Their “likes” and “dislikes” make for massive rejoinders rather than critical insight. It’s actually a kind of bland politics, as if books and stories are to be elected or defeated. Everyone is apparently a numerical critic now, though not necessarily an astute one.
I don’t actually believe Blythe has done a thorough job of surveying the digital landscape to consider the assortment and quality of reader reviews out there. There are, in fact, a plenitude of readers penning worthy criticism of fiction. Just as there are so many great writers out there who deserve wider audiences, there also exist critical readers who should be trumpeted farther afield.
Setting that aside, I still happily defend readers content to note a simple up/down vote as their estimation of a book. Not every expression of having read a book demands an in-depth 8,000-word essay on the plight of the modern Citizen of the World.
Rather, I believe Blythe—like so many others in the literary establishment—cannot accept that readers could have any worthwhile opinion about fiction. The world was so much easier when editors at glossy magazines issued the final word on what constituted good fiction and what was a dud. See also a book I’m certain Blythe detests, A Reader’s Manifesto, which tears apart—almost point by point—Blythe’s gripes.
When B. R. Myers’ Manifesto was published twenty years ago, a major criticism of it was that Myers was tilting at windmills—that the literary establishment was not as snobbish and elitist as he described. Yet here Blythe is, practically copping to the charges.
Thus the inanity of his complaint that today’s readers hold the power to “inflict economic power” when, apparently, such power should reside solely with critics and magazine editors. I don’t even want to argue the point; it’s a retrograde understanding of how the world should work. This is why golden-age thinking is so pernicious—because things used to be this way, it must have been the best way. Except when it wasn’t.
Of course the world was easier for the editors of national slicks fifty years ago, just as life used to be good for book publishers, major news broadcasters, and the rest of the national media. It was also deeply unsatisfying if one were not standing near the top of those heaps. It does not take much scratching in the dirt to understand the motivations of the counterculture and punk movements in producing their own criticism. The only other option back then was to bow to the opinions of a klatch of New York City editors and critics whose ascendancy was even more opaque than that of the bishops of the Holy See.
That said, it’s good to see a former Esquire editor praise the fiction output of magazines that, not so long ago, editors at his level were expected to sneer at: Publications such as Redbook, McCall’s, Analog, and Asimov’s Science Fiction all get an approving nod from Blythe.
But to cling to the assertion that in mid-century America “short fiction was a viable business, for publishers and writers alike” is golden age-ism at its worst. Sure, a few writers could make a go of it, but in this case the exceptions do not prove the rule. The vast sea of short story writers in America had to settle for—and continue to settle for—being published in obscure literary magazines and paid in free copies.
No less a figure than Arthur Miller opined that the golden age of American theater arced within his own lifetime. Pianist Bill Evans remarked he was blessed to have experienced the tail end of jazz’s golden age in America before rock ‘n’ roll sucked all the oxygen out of the room. Neither of those artistic golden ages perished because of the Internet.
What caused them to die? That’s complicated, sure, but their demises—or, at least, their rapid descents—were preceded by a turn toward the avant-garde. Which is to say, it became fashionable for jazz and theater to distance themselves from their audiences under the guise of moving the art forward. The only moving that happened, though, was the audience heading for the exits.
Blythe then turns his attention to a third gripe in his meandering essay. Without a shred of evidence, he argues that the digital revolution of the last twenty-five years metastasized into a cultural Puritanism in today’s publishing world:
Perhaps because of online mass condemnations, there’s simply too much of an ethical demand in fiction from fearful editors and “sensitivity readers,” whose sensitivity is not unlike that of children raised in religious families… Too many authors and editors fear that they might write or publish something that to them, at least, is unknowingly “wrong,” narratives that will reveal their ethical ignorance, much to their shame. It’s as if etiquette has become ethics, and blasphemy a sin of secularity.
I cannot deny that there appears to be a correlation between the rise of the Internet in our daily lives and the shift over the last decade to cancel or ban “problematic” literature. What I fail to see is how pop-up alerts or a proliferation of Wi-Fi hot spots is to blame for this situation.
If Blythe were to peer backward once more at his golden age of gin-soaked lunches, he would recall a nascent cultural phenomenon called “political correctness.” P.C. was the Ur-movement behind today’s sensitivity readers and skittish editors. Social media whipped political correctness’s protestations into a hot froth of virtuous umbrage—a video game of one-upmanship in political consciousness, where high scores are tallied in likes and follower counts. Using social media as leverage to block books from publication was the logical next step. But blaming computers for this situation is like blaming neutrons for the atom bomb.
After a dozen paragraphs of shaking my head at Blythe’s litany of complaints, I was pleasantly surprised to find myself in agreement with him:
The power of literary fiction—good literary fiction, anyway—does not come from moral rectitude. … Good literature investigates morality. It stares unrelentingly at the behavior of its characters without requiring righteousness.
At the risk of broken-record syndrome, I’ll repeat my claim that Charles Baxter’s “Dysfunctional Narratives” (penned twenty-five years ago, near the beginning of the Internet revolution) quietly predicted the situation Blythe is griping about today. Back then, Baxter noticed the earliest stirrings of a type of fiction where “characters are not often permitted to make intelligent and interesting mistakes and then to acknowledge them. … If fictional characters do make such mistakes, they’re judged immediately and without appeal.” He noted that reading had begun “to be understood as a form of personal therapy or political action,” and that this type of fiction was “pre-moralized.”
Unlike Blythe, Baxter did not fret that literary fiction would perish. Baxter was a creative writing instructor at a thriving Midwestern MFA program. He knew damn well that writing literary fiction was a growth industry, and in no danger of extinction. What concerned him was how much of this fiction was (and is) “me” fiction, that is, centered around passive protagonists suffering through some wrong. He noticed a dearth of “I” fiction with active protagonists who make decisions and face consequences.
As Blythe writes:
Too many publishers and editors these days seem to regard themselves as secular priests, dictating right and wrong, as opposed to focusing on the allure of the mystifying and the excitement of uncertainty. Ethics and aesthetics appear in this era to be intentionally merged, as if their respective “good” is identical.
If Blythe is going to roll his eyes at the glut of reader-led cancellations and moralizing editors, perhaps he could consider another glut in the literary world: the flood of literary memoirs, with their “searing” psychic wounds placed under the microscope and their inevitably featherweight closing epiphanies. These testaments of self-actualization may be shelved under nonfiction, but they are decidedly fictional in construction. In the literary world, stories of imagination and projection have been superseded by stories of repurposed memory, whose critical defense is, invariably, “But this really happened.”
It was not always so. Memoir was once synonymous with popular fiction. Autobiography was reserved for celebrities such as Howard Cosell and Shirley MacLaine, or for a controversial individual who found themselves in the nation’s spotlight for a brief moment. It was not treated as a high art form, and was perceived in some quarters as self-indulgent. No more.
There remains an audience for great fiction. Readers know when they’re being talked down to. They know the difference between a clueless author being crass and a thoughtful author being brutally honest. They also know the difference between a ripping yarn and a pre-moralized story they’re “supposed” to read, like eating one’s vegetables.
The death of literary fiction—especially the short story—will not be due to iPhone notifications and social media cancellations. Perhaps the problem Blythe senses is the loss of a mission to nurture and promote great fiction. The literary world has turned inward and grown insular. Its priorities are so skewed, I’ve witnessed literary writers question if fiction can even be judged or critiqued. The worsening relationship of class to literary fiction should not be overlooked, either.
If Blythe laments the state of Asimov’s Science Fiction, perhaps he should check out the thriving Clarkesworld. Substacks devoted to short fiction are regularly delivering work to thousands of readers. I don’t know if these publications’ editors are gulping down Negronis during their daily Zoom meetings—but as long as they’re putting out quality fiction that challenges and questions and enlightens, maybe that doesn’t matter, and never did.
What if I told you that there’s been a sea-change in American storytelling over the past half-century? Not merely a change in subject matter, but that the fundamental nature of American narratives radically shifted? Would you believe me?
Now, what if I told you that a writer twenty-five years ago described these “new” stories, and even predicted they would become the dominant mode in our future? Would you believe that?
In 1997, Charles Baxter published Burning Down the House, a collection of essays on the state of American literature. It opens with “Dysfunctional Narratives: or, ‘Mistakes were Made,’” a blistering piece of criticism that not only detailed the kinds of stories he was reading back then, but predicted the types of stories we read and tell each other today.
Baxter appropriated the term “dysfunctional narrative” from poet C. K. Williams, but he expounded and expanded upon it so much that it’s fair to say he’s made the term his own. He borrowed a working definition of dysfunctional narratives from novelist Marilynne Robinson, who described this modern mode of writing as a “mean little myth”:
One is born and in passage through childhood suffers some grave harm. Subsequent good fortune is meaningless because of the injury, while subsequent misfortune is highly significant as the consequence of this injury. The work of one’s life is to discover and name the harm one has suffered.
Baxter adds that the source of this injury “can never be expunged.” As for the ultimate meaning of these stories: “The injury is the meaning.”
To claim this mode of writing has become the dominant one in American culture demands proof, or at least some supporting evidence. Baxter lists examples, such as Richard Nixon’s passive-voice gloss over the Watergate cover-up (“mistakes were made”), Jane Smiley’s A Thousand Acres, and conspiracy theories, among others.
“Dysfunctional Narratives” doesn’t succeed by tallying a score, however. Rather, it describes a type of story that sounds all-too-familiar to modern ears:
Reading begins to be understood as a form of personal therapy or political action. In such an atmosphere, already moralized stories are more comforting than stories in which characters are making complex or unwitting mistakes.
Don’t merely consider Baxter’s descriptions in terms of books. News stories, the social media posts scrolling up your daily feed, even the way your best friend goes on about how their boss slighted them at work—all constitute narratives, small or large. Dysfunctional narratives read as if the storyteller’s thumb is heavy on the moral scale—they feel rigged.
It does seem curious that in contemporary America—a place of considerable good fortune and privilege—one of the most favored narrative modes from high to low has to do with disavowals, passivity, and the disarmed protagonist.
(I could go on quoting Baxter’s essay—he’s a quotable essayist—but you should go out and read all of Burning Down the House instead. It’s that good.)
Dysfunctional narratives are a literature of avoidance, a strategic weaving of talking points and selective omissions to block counter-criticism. If that sounds like so much political maneuvering, that’s because it is.
“Mistakes were made”
Let’s start with what dysfunctional narratives are not: They’re not merely stories about dysfunction, as in dysfunctional families or learning dysfunctions. Yes, a dysfunctional narrative may feature such topics, but that is not what makes it dysfunctional. The term describes how the story is told, the strategies and choices the author has made to tell their story.
Baxter points to Richard Nixon’s “mistakes were made” as the kernel for the dysfunctional narrative in modern America. (He calls Nixon “the spiritual godfather of the contemporary disavowal movement.”) He also holds up conspiracy theories as prototypes:
No one really knows who’s responsible for [the JFK assassination]. One of the signs of a dysfunctional narrative is that we cannot leave it behind, and we cannot put it to rest, because it does not, finally, give us the explanations we need to enclose it. We don’t know who the agent of action is. We don’t even know why it was done.
Recall the tagline for The X-Files, a TV show about the investigation of conspiracy theories: “The truth is out there.” In other words, the show’s stories can’t provide the truth—it’s elsewhere.
More memorably—and more controversially—Baxter also turns his gaze upon Jane Smiley’s A Thousand Acres, which features the use of recovered memories (“not so much out of Zola as Geraldo“) and grows into “an account of conspiracy and memory, sorrow and depression, in which several of the major characters are acting out rather than acting, and doing their best to find someone to blame.”
In a similar vein, a nearly-dysfunctional story would be The Prince of Tides by Pat Conroy. It centers on a family man who, via therapy, digs through memories of a childhood trauma which has paralyzed him emotionally as an adult. He gradually heals, and goes on to repair his relationship with his family. Notably, his elderly father does not remember abusing him years earlier, leaving one wound unhealed.
Another example would be Nathanael West‘s A Cool Million, which follows a clueless naif on a cross-American journey as he’s swindled, robbed, mugged, and framed. By the end, the inventory of body parts he’s lost is like counting the change in your pocket. It might be forgiven as a satire of the American dream, but A Cool Million remains a heavy-handed tale.
This leads to another point: A dysfunctional narrative is not necessarily a poorly told one. The dysfunction is not in the quality of the telling, but something more innate.
An example of a more topical dysfunctional narrative is the story of Aziz Ansari’s first-date accuser. The complaints of just about any politician or pundit who claims they’ve been victimized or deplatformed by their opponents are dysfunctional, too. In almost every case, the stories feature a faultless, passive protagonist being traumatized by the more powerful or the abstract.
There’s one more point about dysfunctional narratives worth making: The problem is not that dysfunctional narratives exist. The problem is the sheer volume of them in our culture, the sense that we’re being flooded—overwhelmed, even—by their numbers. That’s what seems to concern Baxter. It certainly concerns me.
A literature of avoidance
In his essay Ur-Fascism, Umberto Eco offers this diagram:
one     two     three   four
abc     bcd     cde     def
Each column represents a political group or ideology, all distinct, yet possessing many common traits. (Think of different flavors of Communism, or various factions within a political party.) Groups one and two have traits b and c in common, groups two and four have trait d in common, and so on.
Eco points out that “owing to the uninterrupted series of decreasing similarities between one and four, there remains, by a sort of illusory transitivity, a family resemblance between four and one,” even though they do not share any traits. The traits form a chain—there is a common “smell” between the political groups.
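Eco’s point about overlapping traits is easy to check mechanically. Below is a minimal sketch in Python, using the hypothetical group names and trait letters from the diagram above: each adjacent pair of groups shares traits, while groups one and four share none at all.

```python
# Eco's four hypothetical groups and their traits, per the diagram above.
groups = {
    "one":   {"a", "b", "c"},
    "two":   {"b", "c", "d"},
    "three": {"c", "d", "e"},
    "four":  {"d", "e", "f"},
}

names = list(groups)
for g1, g2 in zip(names, names[1:]):
    # Adjacent groups always overlap: {b, c}, then {c, d}, then {d, e}.
    print(g1, "&", g2, "share", sorted(groups[g1] & groups[g2]))

# The two ends of the chain share nothing, yet the intermediate links
# lend them Eco's "illusory transitivity": a family resemblance.
print("one & four share", sorted(groups["one"] & groups["four"]))  # prints []
```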
Not all dysfunctional narratives are exactly alike, or share the same set of traits, but they do have a common “smell.” Even if a 9/11 conspiracy theory seems utterly unlike A Cool Million, both may be dysfunctional.
Likewise, in the traits that follow, just because a story doesn’t include all of them doesn’t mean it “avoids dysfunction.” Rather, dysfunctional narratives are built by the storyteller selecting the bricks they need to buttress their message:
A disarmed protagonist
An absent antagonist
Minimal secondary characters
An authorial thumb on the scale
“Pre-moralized”
A vaporous conclusion
Authorial infallibility and restricted interpretations
The most common trait of the dysfunctional narrative is a faultless, passive main character. Baxter calls this the “disarmed protagonist.” Baxter differentiates between “I” stories (“the protagonist makes certain decisions and takes some responsibility for them”) and “me” stories (“the protagonists…are central characters to whom things happen”). Dysfunctional narratives are the “me” stories.
And the errors these “me” characters make—if any—are forgivable, understandable, or forced upon them by dire circumstances. Compare this to the mistakes the people around them make—monstrous, unpardonable sins:
…characters [in stories] are not often permitted to make interesting and intelligent mistakes and then to acknowledge them. The whole idea of the “intelligent mistake,” the importance of the mistake made on impulse, has gone out the window. Or, if fictional characters do make such mistakes, they’re judged immediately and without appeal.
Power dynamics are a cornerstone of all narratives, but one “smell” of the dysfunctional variety is an extraordinary tilting of power against the main character. The system, or even the world, is allied against the protagonist. A close read of these narratives reveals an authorial thumb on the story’s moral scale, an intuition that the situation has been soured a bit too much in the service of making a point. This scale-tipping may be achieved many ways, but often it requires a surgical omission of detail.
Hence how often in dysfunctional narratives the antagonist is absent. A crime in a dysfunctional novel doesn’t require a criminal. All it needs, in Robinson’s words, is for the main character to have endured some great wrong: “The work of one’s life is to discover and name the harm one has suffered.”
Name the harm, not the perpetrator. Why not the perpetrator? Because often there’s no person to name. The harm is a trauma or a memory. The perpetrator may have disappeared long ago, or died, or have utterly forgotten the wrongs they inflicted (as the father does in Prince of Tides). The malefactor may be an abstraction, like capitalism or sexism. But naming an abstraction as the villain does not name anything. It’s like naming narcissism as the cause of an airliner crash. This is by design. Abstractions and missing antagonists don’t have a voice. Even Satan gets to plead his case in Paradise Lost.
No ending is reached in a dysfunctional narrative, because there’s only a trauma, or a memory, or an abstraction to work against. These injuries never heal. Memories may fade, but the past is concrete. Once the story is told, the trauma is recorded and notarized like a deed. “There’s the typical story in which no one is responsible for anything,” Baxter complained in 2012. “Shit happens, that’s all. It’s all about fate, or something. I hate stories like that.” These stories trail off at the end, employing imagery like setting suns or fading echoes to signify a story that will never conclude.
The most surface-level criticism of these narratives is that we, the readers, sense we’re being talked down to by the author. “In the absence of any clear moral vision, we get moralizing instead,” Baxter writes. A dysfunctional narrative dog-whistles its morality, and those who cannot decode the whistle are faulted for it. The stories are pre-moralized: The reader is expected to understand beforehand the entirety of the story’s moral universe. For a reader to admit otherwise, or to argue an alternate interpretation, is to risk personal embarrassment or confrontation from those who will not brook dissent.
And making the reader uncomfortable is often the outright goal of the dysfunctional narrative. The writer is the presumed authority; the reader, the presumed student. It’s a retrograde posture, a nagging echo from a less democratic time. (When I read A Brief History of Time, I was most certainly the student—but Hawking admirably never made me feel that way.) Dysfunctional narratives are often combative with the reader; they do not acknowledge the reader’s right to negotiate or question the message. With dysfunctional narratives, it’s difficult to discern if the writer is telling a story or digging a moat around their main character.
“What we have instead is not exactly drama and not exactly therapy,” Baxter writes. “No one is in a position to judge.” A dysfunctional narrative portrays a world with few to no alternatives. A functional narrative explores alternatives. (This is what I mean when I write of fiction as an experiment.)
This is why so many dysfunctional narratives are aligned to the writer’s biography—who can claim to be a better authority on your life, after all? But the moment a reader reads a story, its protagonist is no longer the author’s sole property. The character is now a shared construct. Their decisions may be questioned (hence the passive nature of the protagonists—inaction avoids such judgments). If the author introduces secondary characters, they can’t claim similar authority over them—every additional character is one more attack vector for criticism, a chipping away of absolute authority over the story itself. That’s what happened to sensitivity reader Kosoko Jackson in 2019, whose debut novel was pulled before publication due to questions over his secondary characters.
Of all the traits listed—from the disarmed protagonist to the vaporous conclusion—the trait I find the “smelliest” is authorial infallibility and restricted interpretation. That’s why I used weasel language when I called Prince of Tides “nearly-dysfunctional”: The book is most certainly open to interpretation and questioning. In contrast, questioning a conspiracy theory could get you labeled an unwitting dupe, a useful idiot, or worse.
A Cambrian explosion
What Baxter doesn’t explore fully is why we’ve had this Cambrian explosion of dysfunctional narratives. He speculates on a couple of possibilities, such as their coming down to us from our political leadership (like Moses carrying down the stone tablets), or arising as a byproduct of consumerism. I find myself at my most skeptical when his essay stumbles down these side roads.
When Baxter claims these stories arose out of “groups in our time [feeling] confused or powerless…in such a consumerist climate, the perplexed and unhappy don’t know what their lives are telling them,” it seems Baxter is offering a dysfunctional narrative to explain the existence of dysfunctional narratives. He claims they’re produced by people of “irregular employment and mounting debts.” I strongly doubt this as well. In my experience, these are not the folks who dominate the production of such narratives. Rather, they are the people who turn to stories for escape and uplift…the very comforts dysfunctional narratives cannot provide, and are not intended to provide.
Rather than point the finger at dead presidents or capitalism, I’m more inclined to ascribe the shift to a handful of changes in our culture.
The term “The Program Era” comes from a book of the same name detailing the postwar rise and influence of creative writing programs in the United States. This democratization of creative writing was not as democratic as once hoped, but it still led to a sharp increase in the number of people writing fiction. Most of those students were drawn from America’s upwardly-striving classes. And, thanks to the workshop method used in these programs, it also meant more people sitting quietly while their peers criticized, and sometimes demolished, their stories. (Charles Baxter was a creative writing professor and the head of a prominent writing program in the Midwest. Many of his examples in Burning Down the House come from manuscripts he read as an instructor.)
With the expansion of writing programs came a rise in aspiring writers scratching around for powerful subject matter. Topics like trauma and abuse are lodestones when seeking supercharged dramatic stakes. Naturally, these writers also drew from personal biography for easy access to subject matter.
Another reason is staring back at you: The World Wide Web has empowered the masses to tell their stories to a global audience. This has created a dynamic where everyone can be a reader, a writer, and a critic, all at the same time.
The natural next step in this evolution is for storytellers to strategize how best to defend their work—to remove any fault in the story’s armor, to buttress it with rearguards and fortifications. (This is different from working hard to produce a high-quality experience, which, in my view, is a better use of time.) And there’s been a shift in why we tell stories: not necessarily to entertain or enrich, but as an act of therapy or grievance, or to collect “allies” in a climate where you’re either with me or against me.
Pick up a university literary magazine and read it from cover to cover. The “smell” of dysfunctional narratives is awfully similar to the smell of social media jeremiads.
These are not the kinds of stories I want to read, but it’s becoming increasingly difficult to distance myself from them. Writers should strive to offer more than a list of grievances or acts of score-settling. If it’s too much to ask stories to explain, then certainly we can expect them to connect the dots. Even if the main character does not grow by the last page, we should grow by then, if only a little.
White Noise is not the kind of book one associates with popular entertainment, nor is its author the kind of person to acquiesce to its adaptation.
This merely touches the surface of my reaction to Netflix’s latest project.
If you’re not familiar, the novel White Noise is a 1985 literary comedy about Jack Gladney, a “professor of Hitler studies,” and his nuclear family in a fictional Midwestern college town. The early chapters depict suburban life as soaked in crass consumerism, commercialism, and the ubiquity of mass media. Things go pear-shaped when a chemical spill from a rail car on the edge of town triggers an “airborne toxic event,” leading to an evacuation and the concomitant strain on the family unit.
Remember, this is branded a comedy. The comic thrust of White Noise comes from its supposedly scathing parodies of American middle-class life. Take the novel’s opening paragraphs, where Gladney observes the college’s students returning to campus in single file:
The roofs of the station wagons were loaded down with carefully secured suitcases full of light and heavy clothing; with boxes of blankets, boots and shoes, stationery and books, sheets, pillows, quilts; with rolled-up rugs and sleeping bags; with bicycles, skis, rucksacks, English and Western saddles, inflated rafts. As cars slowed to a crawl and stopped, students sprang out and raced to the rear doors to begin removing the objects inside; the stereo sets, radios, personal computers; small refrigerators and table ranges; the cartons of phonograph records and cassettes; the hairdryers and styling irons; the tennis rackets, soccer balls, hockey and lacrosse sticks, bows and arrows; the controlled substances, the birth control pills and devices; the junk food still in shopping bags — onion-and-garlic chips, nacho thins, peanut creme patties, Waffelos and Kabooms, fruit chews and toffee popcorn; the Dum-Dum pops, the Mystic mints.
You’re forgiven if you stopped reading halfway through and skipped down. You didn’t miss anything.
Critic B. R. Myers categorizes this manner of list-making as a symptom of “a tale of Life in Consumerland, full of heavy irony, trite musing about advertising and materialism, and long, long lists of consumer artifacts, all dedicated to the proposition that America is a wasteland of stupefied shoppers.” That’s pretty much what the first half of White Noise adds up to. There’s more of these dreary lists in the book, and plenty of tin-eared dialogue to boot, as evidenced in this exchange between Gladney and his wife:
“It’s not the station wagons I wanted to see. What are the people like? Do the women wear plaid skirts, cable-knit sweaters? Are the men in hacking jackets? What’s a hacking jacket?”
“They’ve grown comfortable with their money,” I said. “They genuinely believe they’re entitled to it. This conviction gives them a kind of rude health. They glow a little.”
“I have trouble imagining death at that income level,” she said.
“Maybe there is no death as we know it. Just documents changing hands.”
“Not that we don’t have a station wagon ourselves.”
“It’s small, it’s metallic gray, it has one whole rusted door.”
Or this moment—the most famous in the book—when Gladney’s school-aged daughter talks in her sleep:
She uttered two clearly audible words, familiar and elusive at the same time, words that seemed to have a ritual meaning, part of a verbal spell or ecstatic chant.
Toyota Celica.
A long moment passed before I realized this was the name of an automobile. The truth only amazed me more. The utterance was beautiful and mysterious, gold-shot with looming wonder. It was like the name of an ancient power in the sky, tablet-carved in cuneiform.
I suppose for a certain type of person, this is a scream, gold-shot and looming. I’m not that type of person.
It’s the phoniness of White Noise I can’t let go of. The excuse of “it’s a satire” does not absolve the writer of grasping and depicting the reality of a situation. The power of satire is to capture the genuine and turn its underbelly over to tickle it—to reveal its absurdities in both premise and execution. DeLillo never accomplishes this. Professors don’t inventory their students’ goods from afar; husbands don’t tell their wives that the station wagon has a junky door (which any wife would know full well); and if a daughter were repeating a car make and model in her sleep, no one would declare it a religious experience. The absurdity of White Noise is not the mindless consumers populating it, but that this novel is somehow considered a smart skewering of them.
Compare the above to George Carlin’s ridiculing of American materialism in his infamous “Stuff” routine.
DeLillo’s range-finding jabs are timid compared to Carlin’s honed wit, from the basic observation that homes are just lockboxes for our precious objects, to the game-theoretic anguish of weighing which personal goods make the cut for an overnight excursion. Carlin even indulges in his own Consumerland-like list (“Afrin 12-hour decongestant nasal spray”) that is far briefer, funnier, and better curated than DeLillo’s weary catalogs. The laughs come not merely from Carlin’s on-stage antics, but from the gnawing sensation that we’re guilty of what he’s describing—and from Carlin’s tacit admission that he’s guilty of it, too. Meanwhile, in White Noise, we’re supposed to be chortling at the mindlessness of our inferiors. DeLillo is othering America—for whose benefit? Why, Americans like him: Americans who deny their American-ness.
(In this sense, I suspect the Netflix adaptation will execute much like Adam McKay’s smug Don’t Look Up, a spoof also predicated on an America stupefied by cable television and fast food.)
It’s not merely the elitism that fails to connect. Gladney’s field of “Hitler studies” is never really fleshed out. It could have been a fascinating device (although it risked, from page one, falling into the trap of Godwin’s Law). As the book wears on, Hitler studies feels like a gag DeLillo thought would reap comic gold, only to realize chapters in that the idea had run out of gas. The best he can do is have Gladney deliver a lecture comparing Hitler to Elvis Presley—there’s your Godwin’s Law at work. When Gladney admits he’s only recently begun learning German, you realize how thin the satire really is: This is not a real professor of Hitler studies.
When I say “Gladney is not a real professor of Hitler studies,” I don’t mean it in the same way that W. H. Auden said Shrike is not a real newspaper editor in Nathanael West’s Miss Lonelyhearts. Auden meant that Miss Lonelyhearts is not about newspapermen or journalism—the premise of a man taking a position as an advice columnist is merely a convenience to place the book’s heart-wrenching confessional letters into his hands. Gladney’s field is very much intended to satirize him and academia, but the joke is never explored, and it goes unfulfilled. It becomes a shingle hung around Gladney’s neck, doing precious little to inform his worldview or way of life.
The main course for White Noise, though, is the American bourgeoisie. The metaphysics of supermarkets are discussed by the book’s characters (always with a straight face). Death is discussed in excruciating abstractions and legalistic terms. The book concludes with Gladney looking out over a hazy dusk, the air thick with toxic chemicals, and admiring its beauty. (No—really.)
What’s the problem with Netflix adapting the book? In truth, I don’t care much one way or the other. What stunned me—and motivated those posts on social media—is that White Noise was always intended to be a sharp poke in the eye for middle America, with plenty of scorn reserved for major corporations and the mass media.
In other words, White Noise satirizes the type of corporation that’s adapting it into a movie, mocks the people that corporation will be marketing the film at, and despises the corporation collecting its profits as the mindless mob watches on from the comfort of the sofas in their McMansions, with their living rooms, their family rooms, their bedrooms, their candy rooms, their office rooms, their great rooms.
In The Breakfast Club, introverted Allison dares rich-girl Claire to say if she’s a virgin. When Claire demurs, Allison says,
It’s kind of a double-edged sword isn’t it? … If you say you haven’t [had sex], you’re a prude. If you say you have, you’re a slut. It’s a trap.
This is how I feel when the question comes up about the distinction between literary and genre fiction. If you write literary novels, you’re a prude. If you write genre books, you’re a slut.
Is it really that simple? Nothing in this world is so simple. Yet, here are some true-life examples from my own experiences:
Prude
While shopping around my first novel, I got a tip that a prestigious national imprint had a new editor seeking fresh manuscripts. I sent mine along, hopeful but also realistic about my chances.
The rejection slip I received was fairly scathing. The editor claimed my book read like the work of a desperate MFA student who doesn’t understand the “real world.” It was downright derogatory (and oddly personal, considering this editor and I shared a mutual friend). A simple “thanks, no thanks” would have sufficed, but this editor decided it was my turn in the barrel.
Make no mistake: This hoity-toity imprint reeks of MFA aftershave. It’s not a punk-lit imprint. It’s not an edgy alt-lit imprint. It publishes high-minded literary fiction. The author list is upper-middle- to upper-class, blindingly white, and yes, many of them hold an MFA.
And I hold an MFA too, so perhaps the criticism is spot-on—except I wrote the bulk of the novel before I set foot in grad school. I didn’t aim for it to be a literary masterpiece. I wanted to write a page-turner. It’s categorized as literary fiction because it’s not mystery, science-fiction, fantasy, romance, Western, thriller, or YA/New Adult. Write a story about a character and his family, and it’s not merely literary; you’re trying to “be literary.” Who knew?
In my novel, the main character has grown up in a town of physicists who design and perfect weapons of mass destruction—this is the actual childhood I experienced. I thought it would be a good read. (It is a good read.) My character is snarky, sarcastic, crude—and at times, he can be a right asshole. The technical background of the novel is, as they say, ripped from the headlines.
This seems pretty real-world to me. I thought I was writing a funny novel with an unusual setting and situation. This editor took it upon herself to declare I’m actually a Raymond Carver-esque hack penning quiet stories of bourgeois desperation. And that I should stop being that writer.
So, there’s the rejection slip telling me to quit being literary, even though that’s a categorization I never asked for. And it came from a literary publishing house. It’s kind of a double-edged sword, isn’t it?
Slut
After Amazon published my second novel, I began to sense a change in the attitudes of many of my writer friends. At first it was slight, like a shift in air movement when a door in the room is opened. Gradually, though, the emotional tension grew to the point it could not be denied.
I wondered if the problem was one of jealousy. My book had been picked up by a large company, but Amazon was not what you would call an A-list publisher (back then, at least—times have changed). And, they only published my book in digital Kindle format. I had to rely on CreateSpace to offer a paperback edition. The advance money was not huge, and the publicity not so widespread. It all seemed pretty modest to me, and I thought my friends would recognize it as such.
My novel is set in an alternate universe where human reproductive biology is tweaked in a rather significant way. This book is obviously science-fiction. Since the protagonist is a thirteen-year-old girl, it neatly fits into the YA slot as well.
And I’m comfortable with those categorizations. I grew up reading Asimov, Bradbury, Silverberg, and other science-fiction writers of the Golden and Silver Ages who laid so much groundwork for the genre. More importantly, I wanted to write another page-turner, a real unputdownable book. From the Amazon reviews, I think I succeeded.
The tip-off for the issue with my friends was when my wife asked one of them if she’d read my new book. The answer was a murmured, “I would never read a book like that.” This from a person I counted as a friend, and had known for ten years.
Before this, I’d heard her repeat the trope that all genre fiction is formula, as mindless as baking a cake from a box of mix. I always let it go, for the sake of harmony. Now it was being thrown in my face.
The funny thing is, one Amazon editor told me she felt in hindsight my science-fiction YA novel was not a good fit for their imprint. They were more interested in “accessible” genre fiction for their readers, and that my work was—yep—too literary. It’s a trap.
Tease
When Claire refuses to reveal if she’s a virgin, bad-boy Bender suspects she’s a tease:
Sex is your weapon. You said it yourself. You use it to get respect.
Between being a literary author and a genre writer, there’s a third way: the literary-genre writer. These are the teases. They write genre fiction, but make it literary to get respect. And often they get it.
Examples of teases are Haruki Murakami, China Miéville, Cormac McCarthy, and Margaret Atwood. Much of their work is patently genre, but they are received and analyzed with the same awe and respect reserved for literary novelists.
The knee-jerk reaction is to say these writers prove it’s possible to write literary-genre fiction. I don’t think that’s true at all, though. It only proves that authors accepted into the literary realm get to have it both ways: They avoid the stigma of genre fiction while incorporating the high-stakes dramatic possibilities genre fiction offers.
Consider another literary-genre writer: Kurt Vonnegut. He wrote science-fiction, but his books are rarely shelved in that section. Hell, he even wrote a diatribe about how bad science-fiction writing is (Eliot Rosewater’s drunken “science-fiction writers couldn’t write for sour apples” screed). Yet, Vonnegut is rarely, if ever, permitted into the same circle as Atwood or McCarthy. There’s something “common” about Vonnegut. Only at the end of his life was he cautiously allowed into the literary world. Some still say he doesn’t belong there.
I remain unconvinced it’s the sophistication of a novel itself that moves it into the upper literary tiers. I can point to plenty of books supposedly in the literary strata that are not exceedingly well-written or insightful. Something other than an airy quality is the deciding factor.
The success of a handful of literary-genre writers doesn’t open doors; it only creates a new double-edged trap. An author who pens a literary-style novel can claim it’s literary. See, he added his book to the “Literary Fiction” section on Amazon! But does that make him a member of the literary world? Not at all. There’s something else holding him back.
The trap
The literary/genre distinction purports to explain every aspect of a story: Its relevance, its significance, its quality, its audience, even the goals of the writer when they sat down to write it. Nothing in this world is so simple.
There’s a smell about the literary/genre divide. It smells like class. Literary is upper-class, and pulpy genre is for the proletariat. This roughly corresponds to the highbrow/lowbrow classifications. We even have a gradation for the striving petty bourgeoisie, middlebrow.
(Even calling a novel “middlebrow” is treated with disdain—a lowbrow attempt to raise a genre book to a higher status. It’s easy to fall down the literary/genre ladder, but difficult to ascend.)
I definitely believe the Marxist notion of class exists, both abroad and here in the United States. What I don’t believe is that a work of fiction is “of a class.” Books are utilized as a marker of class—tools to express one’s status. Distinctions like literary vs. genre communicate to members of each class which books they should be utilizing…I mean, reading.
This is not the most original thought, but is it really that simple? Nothing in this world is so simple. And I don’t want it to be simple. As with food, the best reading diet is varied, eclectic, and personal.
Note the real damage here. If writers write the books they want to write, and put their hearts and souls into making them the highest quality they can for their readers, all that hard work is instantly deflated by the literary/genre, prude/slut, highbrow/lowbrow labels.
And if a writer introduces genre conventions into their literary work, they’re a sell-out—a prude tarting it up for cheap attention. And if the author of a genre novel tries to achieve a kind of elegance with their prose and style, they’re overreaching—a slut putting on a church dress. You use it to get respect. We’re punishing people for being ambitious.
I’ve said it elsewhere: People will judge a book by its cover, its publisher, the author’s name, the number of pages, the title, the price, the infernal literary/genre label, its reviews, the number of stars on Amazon—everything but the words between the covers. You know, the stuff that matters.