Given I spend a great deal of time reading stuff off of that there internet, I thought it might be useful to publish some of the traits of the text that large language models produce. By learning some of these, not only can we recognise AI slop much more easily, but we can also improve our own writing.
It seems odd to define good writing with examples of what not to do. But in general, creating text that sounds like anything other than AI slop is a big win for the internet in 2026.
The most obvious trait of LLM output is that it’s grammatically krekt by most definitions, and there are no spelling errors. Thus, I’d advise writers to make some mistakes. Spell-checking is a thing, of course, so maybe spelling errors get a pass. But a few errors in grammar and punctuation are fine and literally show that we’re human. I should know – my grammar is pretty crappy. What a very odd world: Write badly to differentiate yourself from the execrable output of ChatGPT.
I suppose I’d rather read something that’s badly written by a human than something that’s well written (kinda) by an LLM.
Cut it out
Large language models tend to be very verbose, producing text that’s repetitious and full of hyperbole. So a good way not to sound like one is to pare your own text back as far as you can while still preserving the meaning and sentiment of the first draft. The way to do this is to go through every sentence of your finished work by hand and try to remove as many words as possible.
The first things to go might be adjectives and adverbs. These aren’t necessarily bad words, but there’s a tendency to use three badly chosen adjectives or short, descriptive statements where one well-chosen word would be better. Then there’s general sentence construction. Can we rephrase a sentence to make it shorter?
To give some examples, here’s a passage of text from OpenAI, a company that’s, shall we say, more likely than most to deploy an LLM to write for it.
Companies are already overwhelmed with the disconnected systems and governance spread across clouds, data platforms, and applications. AI made that fragmentation more visible, and in many cases, more acute. Agents are now getting deployed everywhere, and each one is isolated in what it can see and do. Every new agent can end up adding complexity instead of helping, because it doesn’t have enough context to do the job well.
Clearly, there’s a lot wrong with the above, in that its message is utter crap designed only to extend the window in which OpenAI can continue to raise new loans. But putting that aside for a moment, let’s use the opening sentence as an exercise in green penmanship.
Companies are already overwhelmed with the disconnected systems and governance spread across clouds, data platforms, and applications.
Let’s break down the first sentence. It’s trying to say that:
- companies are struggling with some things, which are, apparently:
- disconnected systems [I think it means bits of software that don’t talk to each other]
- distributed governance [I think it means the multiple sources of rules governing said software and the data in them]
(Technologically speaking, the sentence in question doesn’t actually make a great deal of sense. But since when did that stop anyone whose mission is to shill their tech product?)
This is what I think it’s trying to say, and it’s shorter:
Unconnected, distributed systems and governance make life difficult for companies.
Note the difference between disconnected (which I’d interpret as something that was once connected having had its cables cut) and unconnected (which implies a system that simply isn’t connected to any other system). Regardless. Here’s part two of our para.
AI made that fragmentation more visible, and in many cases, more acute.
But hang about! We have the past tense here: “made that fragmentation more visible”. Yet the opening sentence is in the present tense: “Companies are already overwhelmed with the…”.
So, to avoid a tense hop, we should have:
AI makes that fragmentation more visible…
So now we’re getting somewhere. Next up is this concept of something becoming more visible. There are lots of better words and phrases. Here are some of them, in context:
- AI throws that fragmentation into relief…
- AI makes that fragmentation [obvious, apparent]…
- Awareness of that fragmentation dawns because of AI…
- The effects of fragmentation are exacerbated…
I’d really like something hoving into view, as I loves me a bit of hoving. But I’ll kill that idea. Seriously, I’d like the word ‘compounded’ here. ‘Compounded’ encompasses the idea that things are ‘more acute’.
AI compounds the problem [of fragmentation].
So now, we have an opening sentence that reads:
Unconnected, distributed systems and governance make life difficult for companies, and AI compounds the problem.
Our prose is suddenly free from extraneous fluff, and therefore punchier – and it’s decidedly less pro-AI. (How did that happen? LLMs go through rounds of fine-tuning that adjust the model’s weights and biases, and ChatGPT is, of course, tuned to ensure that anything AI-related gets the proverbial thumbs-up.) So, in the spirit of the original, perhaps we should pick a variation on what we’ve produced so far to make it more pro-AI? Not on my watch, thank you very much.
Anyways, how about the next part?
Agents are now getting deployed everywhere, and each one is isolated in what it can see and do. Every new agent can end up adding complexity instead of helping, because it doesn’t have enough context to do the job well.
Here’s my version:
Agents act in isolation, causing problems due to a lack of contextual awareness.
Finally then, here’s the original again:
Companies are already overwhelmed with the disconnected systems and governance spread across clouds, data platforms, and applications. AI made that fragmentation more visible, and in many cases, more acute. Agents are now getting deployed everywhere, and each one is isolated in what it can see and do. Every new agent can end up adding complexity instead of helping, because it doesn’t have enough context to do the job well.
And here’s the snappier version, produced with a lavish application of green ink:
Unconnected, distributed systems and governance make life difficult for companies, and AI compounds the problem. Agents act in isolation, causing problems due to a lack of contextual awareness.
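For the curious, we can put a number on the cut. Here’s a minimal Python sketch; counting words by splitting on whitespace is crude, but it’s fine for a rough ratio:

```python
# A rough measure of the edit: whitespace word counts, before and after.

original = (
    "Companies are already overwhelmed with the disconnected systems and "
    "governance spread across clouds, data platforms, and applications. "
    "AI made that fragmentation more visible, and in many cases, more acute. "
    "Agents are now getting deployed everywhere, and each one is isolated in "
    "what it can see and do. Every new agent can end up adding complexity "
    "instead of helping, because it doesn't have enough context to do the job well."
)

rewrite = (
    "Unconnected, distributed systems and governance make life difficult for "
    "companies, and AI compounds the problem. Agents act in isolation, causing "
    "problems due to a lack of contextual awareness."
)

before, after = len(original.split()), len(rewrite.split())
print(f"{before} words -> {after} words ({1 - after / before:.0%} cut)")
# Expected output: 69 words -> 28 words (59% cut)
```

Roughly 60% of the words gone, and nothing of substance with them.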
If we could be bothered, there are likely more improvements to be made here, but given the nature of the source text, I’d rather spend the time doing something else. Anything else, to be honest. Fin.