Why AI Repeats Itself (And How to Red Team Against It)
"The article ends. Then ends again. Then ends a third time. Each ending restates the same conclusion. AI doesn't know it already said this. That's the architecture."
We audited the Ghost archive and found the same failure mode in multiple places. Not bad information. Not empty prose. Something stranger and more annoying: the article would land the point, develop it, and then quietly come back around to land it again with different clothes on. Sometimes a third time. Same thesis in the opening, same thesis in a late wrap-up, same thesis in a closing section pretending to be a new thought.
That is not just bad editing. It is one of the easiest tells in AI-assisted writing. The model has enough intelligence to keep the prose coherent and enough training on repetitive internet forms to believe that repeating itself is what completion looks like.
The Loop
Language models are trained on a lot of writing that says the same thing more than once on purpose. Academic papers restate the abstract. Business writing restates the executive summary. Blog posts restate the hook in the closing because somebody once decided readers need to be escorted back to the point like tourists in a museum.
So the model learns a very ordinary rhythm: open, explain, summarize, summarize the summary. It does not experience the second or third restatement as waste. It experiences it as shape.
There is another problem sitting underneath that one. The model is good at local coherence and bad at keeping a hard ledger of what has already been established, and in how much depth. It knows what kind of sentence should come next. It does not always know that the sentence is the cousin of something it already wrote nine paragraphs ago.
That is why repetition often shows up in respectable clothing. It does not look like copy-paste. It looks like one more clarifying section, or a cleaner restatement, or an ending that feels responsible. You read it and realize the piece is circling itself like it no longer trusts the first blow.
What It Looks Like
The ugliest version is the article with three endings. The softer version is more common: the premise appears in the opening, the body handles it well, then a late section rephrases the entire argument as if it just arrived. Another common tell is the duplicate explanation disguised as helpfulness. A concept gets laid out clearly the first time, then returns later in slightly narrower scope, as if focus itself were new information.
Lists can do this too. A good list has pressure in it. It sorts, contrasts, or sharpens. A dead list just mirrors another paragraph or another list somewhere else in the piece. That is where the AI fingerprint starts to glow. The article looks organized, but it is only rearranging the same furniture.
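A crude version of this audit can be automated. The sketch below is an assumption, not an established tool: the function names, the 0.5 threshold, and the word-set tokenization are all choices made for illustration. It compares every pair of paragraphs by Jaccard overlap of their word sets and flags pairs that look like restatements of each other.

```python
import re

def words(text):
    # Lowercase and strip punctuation so "completion." and "completion," match.
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a, b):
    """Overlap between two word sets: 0.0 (disjoint) to 1.0 (identical)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_echoes(paragraphs, threshold=0.5):
    """Return (i, j, score) for paragraph pairs that restate each other."""
    word_sets = [words(p) for p in paragraphs]
    hits = []
    for i in range(len(word_sets)):
        for j in range(i + 1, len(word_sets)):
            score = jaccard(word_sets[i], word_sets[j])
            if score >= threshold:
                hits.append((i, j, round(score, 2)))
    return hits

draft = [
    "The model repeats its thesis because repetition looks like completion.",
    "Lists should sort, contrast, or sharpen the argument.",
    "Repetition looks like completion, so the model repeats its thesis.",
]
print(flag_echoes(draft))  # flags the first and third paragraphs
```

Word-set overlap is deliberately blunt: it catches the "same furniture, rearranged" case described above, where the vocabulary barely changes, but it will miss a true paraphrase. For that you would need sentence embeddings, which is a heavier tool than a quick editorial red-team pass usually needs.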