Is AI useful to Technical Writers?

Since the head-spinning rise of ChatGPT, most people simply refer to the current crop of generative AI large language models (LLMs) as "AI." LLMs are powerful systems trained on extensive data and have opened new avenues in many fields, including journalism, creative writing, coding, and even visual design. However, when it comes to writing documentation, their use continues to be complex and nuanced.

While generative AI is immensely helpful in some areas, it faces critical challenges in others, particularly due to its inability to validate the truth of its own outputs. In this post, I’ll explore the specific strengths and limitations of LLMs in the domain of technical writing and some best practices for using them effectively.

I should also say up front that I'm currently employed as a Technical Writer, so I have some understandable biases when it comes to this subject. However, I am definitely not in the camp of people who dismiss AI outright. In fact, I'm currently developing a web application that uses generative AI in some use cases.

There is also some early data to suggest that AI helps significantly with the documentation process. Google's 2024 DORA report shows that, of the categories surveyed, documentation quality is where companies adopting AI have seen the greatest impact.

All of this is to say that I'm somewhere in the middle of the debate about using AI in Technical Writing, and I may even have a balanced view on these questions.

Brainstorming and drafting

One of the greatest strengths of LLMs for writers is their brainstorming and content drafting capability. LLMs can provide a solid starting point, quickly generating outlines, topic suggestions, or even full drafts that writers can work from. This is particularly useful for complex topics where initial structuring can be challenging.

For instance, an LLM can generate a rough outline for a user manual, suggesting sections based on common documentation standards. It can produce a draft explaining the steps for setting up software, giving Technical Writers a preliminary structure they can refine and verify. Similarly, LLMs are useful for generating lists of common questions for FAQ sections, creating templates for release notes, or suggesting organizational methods for documentation.

However, as valuable as these capabilities are, writers must treat LLM-generated content as a draft rather than a finished product. The information an LLM provides is determined largely by the datasets used to train the model, and, given the limitations of these models, the text may simply be incorrect. It must be verified and edited by a human expert.

Assisting with repetitive tasks

Another advantage of LLMs is their ability to assist with repetitive or tedious tasks. Technical writers often spend considerable time creating variations of similar content or rephrasing information to suit different documentation types, such as user manuals, developer guides, and troubleshooting documents. LLMs can accelerate these tasks by quickly generating rephrased content, summaries, and variations, allowing writers to maintain consistency across documents without unnecessary repetition.

For example, a Technical Writer might use an LLM to generate different versions of error messages for an application or to suggest alternative explanations for complex technical concepts. Although these outputs still require careful review and editing, they provide a time-saving first draft, freeing writers to focus on more intricate tasks.
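To make that concrete, here is a rough sketch of how a writer might script this kind of rephrasing task. It assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in your environment; the model name, prompt wording, and sample error message are purely illustrative.

    # rephrase.py -- a rough sketch, assuming the OpenAI Python SDK
    # and an OPENAI_API_KEY environment variable. The model name and
    # prompt wording are illustrative only, not a recommendation.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def rephrase_error(message: str, count: int = 3) -> str:
        """Ask the model for alternative phrasings of an error message."""
        prompt = (
            f"Suggest {count} clearer variations of this application error "
            f"message, each under 15 words:\n{message}"
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(rephrase_error("ERR_42: operation failed due to invalid state"))

Every suggestion a script like this returns still needs a human read before it ships, but generating the candidates takes seconds instead of minutes.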

The editing challenge

LLMs today are terrible at editing because they have a fundamental flaw—they don't know what's true. All they can do is guess, based on their training, what is most likely to be true. Their responses are based on patterns within their training data, which means they’re prone to generating plausible but incorrect information. These mistakes are common and have come to be known as "hallucinations." Editing technical content requires a meticulous approach, often involving multiple layers of fact-checking and validation. Unlike journalism or fiction, where creative language is an asset, technical writing demands clarity, precision, and accuracy.

My most recent experience with an LLM-based image tool, DALL-E, illustrates this point well. When generating the image of a robot typing that I used for this blog post, the tool failed to recognize that the paper was backward and that the title on the page contained a spelling error. This sort of fundamental error underscores the fact that generative AI operates on pattern prediction rather than comprehension. The same principle applies to written content generated by LLMs: they don’t “know” what’s accurate; they simply generate text based on the likelihood of it fitting the given context.

But wait, you might argue that LLMs are being used more and more for coding. In fact, Google CEO Sundar Pichai recently said, "More than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers." Surely, if an LLM can code, it can also edit a document. That might sound reasonable, but it's an entirely different use case. With coding, you can run tests and experiments on the code that LLMs write; with editing, there is no equivalent check, and someone who lacks the ability to edit also lacks the skill to verify the LLM's work.

The example I mentioned with the error in the image of the robot is funny, but it doesn't really convey the difficulty of using LLMs for editing. The problem is not that they make huge errors that are easy to spot; it's that they will make mistakes at unexpected times.

To illustrate this point, I gave ChatGPT some text to correct, along with a short list of style guide rules to follow. The rules were standard ones from style guides I've used in the past: for example, don't use contractions, use active voice, and use American spelling. The text I gave ChatGPT included this line: "There is a scene in The Muppet Movie where a bike is ridden by Kermit."

When ChatGPT edited the text, it came back with, "There is a scene in The Muppet Movie where a bike is ridden by Kermit." Now, obviously, that's a problem, so I asked ChatGPT, "Does the last sentence contain passive voice?" The response from the LLM was, "Yes, the last sentence contains passive voice. The phrase 'a bike is ridden by Kermit' is in passive voice. To convert it to active voice, it could be rewritten as: 'Kermit rides a bike.'" I then gave ChatGPT the same text and rules again, and it came back with another version of the sentence: "In The Muppet Movie, Kermit rides a bike." The new version corrected the original error, but it also changed the information the sentence contained. Perhaps that would be OK in this case, but if a person relied on the LLM for corrections, would they have caught the passive voice the first time, or the altered sentence the second?

This is why AI should not be relied on for tasks that demand rigorous editing or verification. Writers must still be prepared to thoroughly review and fact-check any LLM-generated content, especially for technical details. Users who lack editing skills will find it challenging to validate this content accurately, making it critical to view LLM outputs as drafts that require additional human input.

Assisting with coding and scripting

A unique strength of LLMs in the technical writing sphere is their utility for coding and scripting tasks. In cases where documentation requires sample code, LLMs like ChatGPT or AI-powered editors like Cursor can produce code snippets, outline basic functions, or assist with API documentation. Technical Writers who understand programming can experiment with and test the code generated by LLMs to ensure it performs as intended.

For example, if a Technical Writer needs a script that automates data extraction from a specific API, an LLM can help generate a Python script for this purpose. The writer can then test the code, correct any errors, and integrate it into the documentation. This allows writers to save time on programming tasks without compromising on quality, as long as they have the technical skills to test and refine the output.
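Here is a rough sketch of the kind of script an LLM might draft for that task. The endpoint, field names, and output format are hypothetical; a real script would target whatever API your documentation covers.

    # fetch_issues.py -- a hypothetical example of an LLM-drafted script
    # that pulls records from an API and saves them as CSV. The endpoint
    # and field names are made up for illustration.
    import csv

    import requests

    API_URL = "https://api.example.com/v1/issues"  # hypothetical endpoint

    def fetch_issues(status: str = "open") -> list[dict]:
        """Request issues with the given status and return them as dicts."""
        response = requests.get(API_URL, params={"status": status}, timeout=10)
        response.raise_for_status()  # fail loudly on HTTP errors
        return response.json()

    def write_csv(issues: list[dict], path: str = "issues.csv") -> None:
        """Write the id, title, and status of each issue to a CSV file."""
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["id", "title", "status"])
            writer.writeheader()
            for issue in issues:
                writer.writerow({key: issue.get(key) for key in ("id", "title", "status")})

    if __name__ == "__main__":
        write_csv(fetch_issues())

The point is not that the draft is perfect; it's that a writer who knows Python can run it, catch problems such as a missing timeout or unhandled pagination, and fix them before the script appears in the docs.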

The importance of human expertise

The capabilities of LLMs can tempt some users to treat them as a substitute for Technical Writers or subject matter experts (SMEs). However, technical writing relies heavily on specialized knowledge and the ability to communicate with precision.

Human Technical Writers bring an irreplaceable skill set to the table, including familiarity with the industry, knowledge of best practices, and the ability to communicate clearly and accurately. They understand the nuances of their audience’s needs and can ensure that the documentation is both user-friendly and technically sound. LLMs can augment these skills, but they cannot replace them.

In fact, one of the most effective uses of LLMs in technical writing is as a tool for enhancing the writer’s workflow. LLMs can help experienced writers brainstorm, generate content quickly, and complete repetitive tasks, but the final output must be shaped and validated by someone with domain knowledge. The role of the Technical Writer remains crucial, and LLMs are best viewed as an addition to the writer’s toolkit rather than a replacement.

Before moving on, I would like to point out that there are other reasons why people may choose not to use AI to aid their writing. The most prominent is that the companies who created these powerful AI models have been accused of plagiarizing content from many different sources. That topic is worthy of its own post, so I'm not going to dive into it here.

Tips for using LLMs in technical writing

For Technical Writers interested in using LLMs effectively, here are some best practices:

  • Use LLMs for drafts and brainstorming: LLMs can help create outlines, generate content ideas, and produce rough drafts. Start with these drafts and refine them to ensure clarity and accuracy.
  • Do not rely on LLMs for accuracy: Remember that LLMs cannot verify the truth. Fact-check any technical information they generate, especially for complex or specialized content. Always view content generated by LLMs as a first draft requiring human review. This approach ensures quality control and mitigates the risk of errors.
  • Leverage LLMs for coding tasks: If your documentation involves coding, use LLMs to assist with code snippets and scripts. Be sure to test all code outputs thoroughly; a minimal example of such a test follows this list.
  • Continue to collaborate with SMEs for complex topics: For highly technical or niche subjects, consult with SMEs to confirm accuracy. LLMs may generate plausible-sounding information, but they cannot replace expert verification.
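As promised above, here is a minimal sketch of what testing an LLM's output can look like. The slugify function stands in for whatever snippet the LLM produced; the test cases are hypothetical and run with pytest.

    # test_snippet.py -- a minimal sketch of checking an LLM-generated
    # helper before it goes into the docs. The slugify function and its
    # test cases are illustrative stand-ins. Run with: pytest test_snippet.py
    def slugify(title: str) -> str:
        """LLM-generated helper: turn a document title into a URL slug."""
        return "-".join(title.lower().split())

    def test_slugify_basic():
        assert slugify("Getting Started Guide") == "getting-started-guide"

    def test_slugify_extra_whitespace():
        assert slugify("  Release   Notes ") == "release-notes"

If a test fails, that is the LLM's mistake surfacing where you can see it, which is exactly the safety net the editing workflow lacks.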

Final thoughts

Generative AI can help writers streamline workflows, manage repetitive tasks, and generate draft content quickly. But the nature of technical writing demands accuracy and reliability—qualities that LLMs, for all their sophistication, cannot guarantee.

The key to leveraging LLMs in technical writing is to use them as supportive tools while maintaining the essential role of human expertise. Generative AI can simplify parts of the writing process, but it does not replace the need for experienced, knowledgeable Technical Writers who understand their audience and their subject matter. As LLMs continue to evolve, their role in technical writing will likely grow, but human oversight and expertise remain the foundation of high-quality documentation.

Written by: Stephen Cawood
