Writing with the Robot: Three Random Observations on Creating Text Hand-in-Hand with AI


Generative AI and I are just getting to know each other. Right now, I'd describe our relationship as hesitant dating. I'm still a bit too wary to invest a lot of time or cash in getting to know AI at a deep level. But I have become curious enough to have the equivalent of a few coffee chats.

For years, I've put off even shaking hands with this new technology, thinking (arrogantly) that I, a "professional writer," didn't need any new-fangled help. Back in the early 2000s, when I was a full-time communications professor, I enjoyed shocking students by telling them that I never used the spell-checker—because I had actually learned how to spell. Imagine that!

Flash forward a couple of decades, and a few embarrassing typos, and I now will sometimes allow AI to auto-suggest edits to my writing. I tend to keep editing tools locked down until I’ve finished a first draft, so I can avoid interruptions while I’m in the creative flow. But I’ve learned the value of letting Microsoft’s Editor give me its opinion before I finalize a document.

Here are three very unscientific observations based on my limited experiments with AI over the past few weeks. Stay tuned for more on this topic because I’ve just enrolled in Jennifer Goforth Gregory’s course on AI productivity tools for writers. (Thanks, Jennifer, for putting this resource together!)

Observation #1: Microsoft’s Editor is becoming smarter

Strong writing is like a piece of handmade furniture. Its beauty comes from irregularities. Like a knot in a piece of wood, an unusual word or unorthodox syntax gives writing character.

Until recently, I found my occasional experiments with Microsoft’s Editor frustrating because they tended to grind down the texture of my sentences, making them more “correct” but less compelling. Over the past few months, however, I’ve found that the Editor has become more sophisticated and less cranky. (This evolution has, of course, been in progress for a long time, though I have been slow to discover it.) As a result, I’m now starting to incorporate it into my QA process.

As a contender for the title of Producer of the World’s Wordiest Drafts, I appreciate the way AI encourages me to tighten my phrasing. For instance, yesterday it prompted me to change “you have to [do such-and-such]” to “you must,” a much leaner option.

The Editor also does a good job of picking up “grammar glitches” that can be hard to spot, such as niggling usage issues or a subject-verb disagreement in a sentence where the subject is placed far from the verb. (For example, how easily can you detect the two technical flaws in this sentence? The number of survey respondents, while fewer than expected and late to be processed, were still well within the anticipated range. Check the footnote for the answer.)[1]

In grad school, one of my most embarrassing moments was handing in a paper (on Francis Bacon, if I recall) that started with a dangling modifier.[2] I can still see the big red circle around the first sentence on the first page of the graded essay. Microsoft Editor should now save me from such ignominy. (It might also encourage me to replace “ignominy” with a more common word, such as shame or disgrace, but Editor and I will continue to have words about the way that forced Plain Language can rob communication and thinking of nuance.)

Observation #2: LinkedIn’s AI has a long way to go

Copywriting is not my strong suit, and I find headlines especially challenging. So I was keen to try LinkedIn’s AI to optimize my profile. However, when I tried this tool a few weeks ago, the results weren’t just disappointing—they were laughable.

For example, one of the legible headline suggestions was “Dawn Henwood, English PhD.” Several other headlines came out as word salads that didn’t make any sense at all.

When I tried out the tool on my About section, which is definitely due for a revision, it produced text that completely misrepresented my skills and the value I offer clients. It did amp up the language by inserting power words, but the result was a blurb that lacked both specificity and depth. It could have been talking about breakfast cereal.

Copywriting is a form of text production that relies heavily on time-tested formulas, so we should see better and better tools to help with this kind of writing. However, I’m not holding my breath waiting for LinkedIn’s tools to improve my profile.

Observation #3: Canva’s Magic Write is like a thesaurus on steroids

Today was one of those times when I sat down at my desk feeling about as creative as a wet dishrag, so it was the perfect time to check out Canva’s AI tool for copy. I found the experience entertaining and mildly useful. At the very least, it reminded me how the energy of a piece of writing depends on the quality of its verbs, a lesson I often preach but just as often forget when I’m in drafting mode.

My task was to produce a flyer describing a Clarity Connect program for undergraduates and recent grads, the Liberal Arts Passport to Innovation. The program is running for the third time at the University of King’s College this winter, and we are getting ready to launch our first public offering in February.

Here was my problem. As an educator and learning designer, I tend to portray the value of a program through learning outcomes, and the language I use is rigorously clear but seldom sexy. So my first draft of the bullet points describing the “promise” of the program led with dull verbs, such as “recognize,” “identify,” and “articulate.”

Technically accurate, but hardly appealing to my target audience of Gen Z students and their parents. “Let’s see what you can do, Magic Write,” I said.

I asked the tool to make my writing “more fun,” and—poof!—my list of learning outcomes transformed into something that P.T. Barnum might have written. Instead of identifying nontraditional career paths, students were now empowered to discover a “new-fangled maze” of intriguing career options. Instead of creating a LinkedIn profile, they were told they would “showcase their awesomeness.” And overall, the program’s new promise wasn’t to uncover new career paths but rather to “set your heart ablaze” with new possibilities.

While I chuckled over the hyperbole and some of the mixed metaphors, Magic Write gave me an effective refresher course in the virtue of emotive language, especially verbs. I didn’t adopt the concept of a “maze” to describe a career path, since that sounded illogical to me, but I did jazz up all the verbs in my outcomes list.

Going forward, I’ll continue to use Magic Write as I would an amplified thesaurus. And as with any thesaurus, I would advise caution. Novice writers often run into trouble when they try to diversify their vocabulary by looking up synonyms because they lack the judgement required for accuracy. This can lead to distorted, sometimes hilarious results.

For example, let’s say you’re writing an email to a colleague, recounting how a client, a large man with a gruff voice, burst out laughing in response to a proposed sales ad. The statement “Gus laughed” seems a bit tired, so you locate a list of alternatives at Thesaurus.com. Without taking the time to check the definition at Dictionary.com, you swap out “laughed” for “tittered,” a verb more appropriately applied to Gus’s 90-pound, 90-year-old grandmother.

Magic Write requires even deeper due diligence because its rewrites sound snappy and pull out all the emotional stops. But a wise writer will examine each phrase and word to ensure that the meaning and the tone truly align with their intent and the effect they want to produce in their audience.

I’m looking forward to further experiments with AI. In 2024, I expect that my skeptical dating relationship may develop into something more substantial as I continue to explore ways that technology can augment human creativity.

How are you using AI in your writing? I’d love to hear about your experiences!


[1] The first flaw is the use of “fewer” instead of “less.” The second flaw is the use of the plural verb “were” instead of the singular “was.” Corrected, the sentence should read as follows: The number of survey respondents, while less than expected and late to be processed, was still well within the anticipated range.

[2] In case you’ve never committed such a grammatically shameful faux pas, a dangling modifier is a phrase that gives information about a subject that isn’t actually named in the sentence. For example: After conducting the second clinical trial, the drug was ready for the market. In this instance, the modifying phrase “After conducting the second clinical trial” refers to the company or researchers running the test. The drug could not logically conduct its own trial, and yet that’s what the sentence technically says happened.

