Today we’re going to look at how to create technical documentation using ChatGPT, though you can use any LLM; it doesn’t matter. Claude, Google Gemini, DeepSeek, they’re all much the same really. And what I’m going to do is show you a few tactics, so to speak, for how you can approach the writing.
So this is the first of many tutorials; just try to get an idea of what we’re trying to do here. Then what you can do is adapt what we’re doing here to your own documentation. I’ll show you what I mean and we’ll get into it.
Watch the full video on the Klariti YouTube channel.
Prompt 1: Refine the Writing
To start, I have a chunk of text which is intentionally poor quality. It’s okay, it’s fine, but we want to make it better.
What I’ve done is given it an instruction that asks for three things:
- First, I’ve asked it to edit the paragraph to be 30% shorter.
- Second, I’ve asked it to put it into the active voice.
- Third, I’ve asked it to write it at an eighth grade reading level.
 
Essentially what I’m trying to do is tell it to adhere to good technical documentation practices. That’s really it.
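If you'd rather run this kind of prompt from a script instead of pasting it into the chat window, here's a minimal sketch using the OpenAI Python SDK. The model name and the draft text are placeholders, not part of the original example; the prompt simply restates the three instructions above.

```python
# Minimal sketch: the "refine the writing" prompt sent via the OpenAI Python SDK.
# The model name and draft_text are placeholders; swap in your own.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft_text = "Paste the long-winded paragraph you want to improve here."

prompt = (
    "Edit the following paragraph so it is 30% shorter, "
    "written in the active voice, and at an eighth grade reading level.\n\n"
    + draft_text
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # the refined paragraph
```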
Before I go any further, something to consider when you’re using these LLMs is that the amount of text you give it will affect what you get back.
We’ll go into that in future tutorials; just bear that in mind for the time being.
So it gives me back a result which is passable, it’s fine. This is a serious improvement on the long-winded chunk we started with. A few good things about it:
- It’s written in the active voice, so it tells you who’s doing what.
- It’s removed all the fluffy language, the long-winded stuff like “it should be noted by the user,” and tightened that up.
- It’s removed a lot of the long sentences and made it more concise.
- And it’s at the right reading level for the people we’re aiming it towards. So, a serious improvement on what we had before.
 
Prompt 2: Extract Key Information
What you can do with the text it’s given you is extract more information from it. This is just a very simple example, but what I’m doing here is asking it to identify three things inside the paragraph of text that I want to pull out. It went through the text and pulled out these three examples of the most common serialization formats.
Now, you would probably know that if you’re writing the documentation yourself, but what it’s doing here is giving you an idea of how you can genuinely leverage the LLM to identify things in the documentation source and make your material that bit richer. So we have three examples here.
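If you're scripting this step, the extraction prompt can go through the same kind of API call. The exact wording below is an assumption for illustration; adapt it to whatever you want pulled out of your own source text.

```python
# Minimal sketch of the extraction step. The prompt wording is an example only.
from openai import OpenAI

client = OpenAI()

refined_text = "The refined paragraph returned by the first prompt goes here."

extraction_prompt = (
    "From the following text, identify the three most common serialization "
    "formats it mentions and list them as bullet points.\n\n" + refined_text
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": extraction_prompt}],
)

print(response.choices[0].message.content)  # the three extracted items
```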
Prompt 3: Add an Analogy
Now the next thing we can do in this example is ask it to rewrite the section and use an analogy. What I mean by that is that instead of giving somebody a chunk of text with no context and no frame of reference, which can be hard for the reader to understand, you can ask the LLM to come up with some kind of analogy for you and weave that into the text itself.
That’s a pretty nice way to approach enhancing your documentation and give readers that entry point.
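As a rough sketch, the analogy prompt looks like this when scripted. Again, the wording and model name are placeholders rather than the exact prompt from the video.

```python
# Minimal sketch of the "add an analogy" rewrite. Prompt wording is an example only.
from openai import OpenAI

client = OpenAI()

section_text = "The section you want rewritten goes here."

analogy_prompt = (
    "Rewrite the following section and weave in a simple analogy that gives "
    "the reader a familiar frame of reference for how it works.\n\n" + section_text
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": analogy_prompt}],
)

print(response.choices[0].message.content)  # the rewritten section with the analogy
```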
So it’s done that, and it’s made a passable attempt at explaining how it works. Here’s the point: you’re collaborating with the LLM, and through iteration you’re going to improve the text. I would also suggest that you **keep a library of these prompts** someplace you can refer to going forward, and update them as you go.
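One simple way to keep that library, assuming you’re working in Python, is a plain dictionary of templates with a placeholder for the text. The names and wording here are just examples you’d adapt to your own prompts.

```python
# A reusable prompt library as a plain dictionary of templates.
# Each template has a {text} placeholder for the draft you're working on.
PROMPT_LIBRARY = {
    "refine": (
        "Edit the following paragraph so it is 30% shorter, in the active "
        "voice, and at an eighth grade reading level.\n\n{text}"
    ),
    "extract": (
        "Identify the three most important items mentioned in the following "
        "text and list them as bullet points.\n\n{text}"
    ),
    "analogy": (
        "Rewrite the following section and weave in a simple analogy that "
        "gives the reader a frame of reference.\n\n{text}"
    ),
}

# Usage: fill the placeholder with your draft before sending it to the LLM.
prompt = PROMPT_LIBRARY["refine"].format(text="Your draft paragraph here.")
```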
Watch the full Klariti tutorial here: http://www.youtube.com/watch?v=iL5rC5-NWDI

