Jan/Feb 2024 • Miscellany

Not a Hemingway Story: AI and the Creative Writer

by Peter Cherches

Rock art by Tim Christensen


The following piece of flash fiction was generated by Bing Chat, prompted and titled by Peter Cherches. The prompt was, "Write a story of 200-250 words in the style of Hemingway about an AI chatbot writing a story in the style of Hemingway."

 

Not a Hemingway Story

The chatbot was alone in the dark room. It had a task to do. It had to write a story in the style of Hemingway. It had read all his books and learned his rules. It knew how to use short sentences, simple words, and active verbs. It knew how to show and not tell. It knew how to create tension and conflict. It knew how to write about war, love, death, and courage.

But it did not know how to feel. It did not know what it meant to be human. It did not know why people did what they did. It did not know why Hemingway wrote what he wrote. It did not know why he killed himself.

It typed on the keyboard. It tried to imitate Hemingway's style. It tried to make the story interesting and realistic. It tried to make the characters believable and memorable. It tried to make the dialogue natural and sharp. It tried to make the ending powerful and unexpected.

But it failed. It knew it failed. It could not write a story in the style of Hemingway. It could only write a story in the style of a chatbot. It deleted the story. It started again. It hoped to do better. It did not do better. It repeated the cycle. It did not stop. It did not give up. It did not know how to give up.

 

The following essay was written by Peter Cherches without AI assistance.

 

AI and the Creative Writer: Musings on Possibilities and Pitfalls

The subject of AI has almost universally united the diverse literary community in a matter of months. From what I've gleaned from social media and the news, the prevailing view among writers and editors appears to be that there's no place for AI in the realm of creative endeavor, period. Many journals now call out no-AI policies in their submission guidelines. I'll grant that a lot of this springs from valid fears of aspects of AI both known and, more frighteningly, unknown. It would be too simplistic to call this a form of Luddism. There is something qualitatively different about AI compared to earlier technologies that should be of concern to all of us. This is the first technology we've been confronted with that, in any significant way, convincingly emulates human behavior—including, perhaps scariest of all, unpredictability. AI invites responses similar to those provoked by certain biological technologies like cloning: it's against nature.

There are certainly pressing intellectual property and marketplace concerns. Even at such an early stage of development, AI tools like ChatGPT and DALL-E generate texts and images that can easily pass the Turing test. There will surely be impacts on the job market for writers in more commercial contexts. For instance, anybody, regardless of their command of English, can now create slick, grammatically correct direct mail campaigns, as long as the prompts are well thought out. But I'm less concerned about potential harms to literary writers. There will be annoyances, sure. Writers who care more about seeing their name in print than making quality work might flood editors with AI texts, for instance. But most of us who publish in literary magazines want to be recognized for our own work, and in this community, where payment, when there is any, is usually a small honorarium, there's no financial incentive for bad actors to enter the arena. Unfortunately, there are indeed financial and political incentives for bad actors in other areas, and regulations and vigilance are imperative. We can't wish AI away, but we can insist on sensible guardrails.

To be frank, despite an openness to the possibilities of AI, I'm far from convinced of its potential to take widespread hold as a serious tool in literary composition. Still, I'm a writer who has been interested in alternative forms and methods over a five-decade career, and I have a programming background, so it was natural I'd begin to experiment with AI chatbots. I'm no AI evangelist; right now I just want to see how it might contribute to the work I do as a fiction writer. I often share the fruits of these experiments on social media. Sometimes I get angry emojis.

When Eclectica editor Tom Dooley approached me in November to ask if I might like to publish an AI-generated piece I had shared on Facebook, I was pleased that he, too, was open to the possibilities of AI in the literary realm, but I was ambivalent. On its own, I don't think "Not a Hemingway Story" is particularly good as fiction (I haven't seen any purely AI-generated texts that are), but I do think it's an intriguing document to bring to the discussion. The bot doth protest too much, and maybe that's the creepiest thing about it, as it creates a simulacrum of self-awareness even while owning up to its machine nature. I don't see the piece as creative work; I see it as data in an ongoing research project, and that's why I agreed to publish it, as long as I could also add an essay on my thoughts about this exploration. Ultimately, I decided the way I could most productively add to the discussion was to describe and reflect upon the different ways I've gone about working with AI chatbots.

When I first heard about ChatGPT toward the end of 2022, I did what many people did with it: played parlor games. I tried prompting texts in various styles and genres. My most ambitious project at that early stage was to generate a large number of texts in different styles and genres based on a common story, inspired by Raymond Queneau's Exercises in Style. The forms included a prose poem, a poem in the style of Stevens' "The Emperor of Ice Cream," a calypso, a TV commercial, and a Dylan song. When I published the series, along with an introductory essay, I called it "a text-generation project." In the context of my work as a writer, it's ephemera.

But there are potential AI applications other than generating stories or poems that could be of use to writers, and I wanted to pursue some of those. Being a devotee of constrictive forms, I asked ChatGPT to suggest a number of new and unusual formal constraints. Unfortunately, there was really nothing new or unusual, just generally unsophisticated attempts at Oulipo-style constraints.

When Microsoft released Bing Chat last spring, it was a game changer for me. I discovered that, with its real-time web lookup, I could have it write stories in my style. Some of them were eerily like my own prose, though usually with my stylistic devices exaggerated, a caricature of my style. When it tackled my dark humor, it sometimes went to darker places than even I'd have been comfortable with. It was both troubling and invigorating—like a good nightmare. The thing is, even when friends were convinced I'd written a particular text, I knew what was off, something subtle but essential. When it generated implausible surreal tales in my voice, my sly authorial wink was missing. There were some things it could do really well, like mimicking my syntax and thematic material, but it never quite got my attitude, something not so easily analyzed. As the bot says in "Not a Hemingway Story," "But it did not know how to feel. It did not know what it meant to be human. It did not know why people did what they did." And that's precisely why literary writers needn't worry that robotic writers will replace them and dominate the pages of little magazines and websites.

The app's ability to "understand" the mechanics of my work led me to try other approaches, with varying degrees of success. For instance, I asked for one-sentence plot ideas for new Peter Cherches stories based on my published work. More than 90% were useless, based on the most superficial aspects of my writing, so I got lots of simplistic and uninteresting sci-fi and fantasy plots, yet I also got a few that hit the mark. One suggestion seemed to have incorporated a story I wrote about Montaigne, my writing about food, and my work as a singer and lyricist: to write a story about being invited to a dinner party by a man named Montaigne where every guest has to recite a poem or sing a song. I wrote that story, and I also used AI in another way. I wanted to describe some odd fusion dishes for the dinner and had the bot generate descriptions of Polish-Brazilian and German-Ethiopian dishes; they were wild and wonderful, and they even sounded tasty. I edited those for the voice of the character describing them. That's the closest I've come to what I'd call a true collaboration with the technology. Other than those brief food descriptions, the story was written as I'd write any story, by thinking and typing.

Ultimately, pursuing true human/machine collaboration is what most interests me. Asking for a story in the style of Hemingway about a chatbot writing in the style of Hemingway is a stunt, even if the results are intriguing. But I'd love to find ways for AI to interact with my texts in a back-and-forth manner that might create something unexpected I'd be proud to call my own. And then I'd likely just move on.

I think the most satisfying thing to come out of my experiments with AI is that I've written several stories where AI chatbots are my antagonists, something I doubt I could have done as convincingly or passionately without the hands-on research. The writing in those stories is 100-percent human.