Why AI Repeats Itself (And How to Red Team Against It)

"The article ends. Then ends again. Then ends a third time. Each ending restates the same conclusion. AI doesn't know it already said this. That's the architecture."

We audited the Ghost archive and found the same failure mode in multiple places. Not bad information. Not empty prose. Something stranger and more annoying: the article would land the point, develop it, and then quietly come back around to land it again with different clothes on. Sometimes a third time. Same thesis in the opening, same thesis in a late wrap-up, same thesis in a closing section pretending to be a new thought.

That is not just bad editing. It is one of the easiest tells in AI-assisted writing. The model has enough intelligence to keep the prose coherent and enough training on repetitive internet forms to believe that repeating itself is what completion looks like.

The Loop

Language models are trained on a lot of writing that says the same thing more than once on purpose. Academic papers restate the abstract. Business writing restates the executive summary. Blog posts restate the hook in the closing because somebody once decided readers need to be escorted back to the point like tourists in a museum.

So the model learns a very ordinary rhythm: open, explain, summarize, summarize the summary. It does not experience the second or third restatement as waste. It experiences it as shape.

There is another problem sitting underneath that one. The model is good at local coherence and bad at keeping a hard ledger of what has already been established at sufficient depth. It knows what kind of sentence should come next. It does not always know that the sentence is the cousin of something it already wrote nine paragraphs ago.

That is why repetition often shows up in respectable clothing. It rarely looks like copy-paste. It looks like one more clarifying section, or a cleaner restatement, or an ending that feels responsible. You read it and realize the piece is circling itself like it no longer trusts the first blow.

What It Looks Like

The ugliest version is the article with three endings. The softer version is more common: the premise appears in the opening, the body handles it well, then a late section rephrases the entire argument as if it just arrived. Another common tell is the duplicate explanation disguised as helpfulness. A concept gets laid out clearly the first time, then returns later in slightly narrower scope, as if focus itself were new information.

Lists can do this too. A good list has pressure in it. It sorts, contrasts, or sharpens. A dead list just mirrors another paragraph or another list somewhere else in the piece. That is where the AI fingerprint starts to glow. The article looks organized, but it is only rearranging the same furniture.


Red Team The Rhythm

The fix is less glamorous than people want. You do not outsmart the repetition with one magic prompt. You corner it.

Tell the model each concept gets one full explanation. Tell it the ending must add a new angle instead of restating the opening. Give sections distinct jobs. If the piece is technical, define what belongs where so the body does not keep smuggling the same explanation back in under a new heading.
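Those constraints work best when they live somewhere reusable instead of being retyped into every chat. A minimal sketch in Python of how they might be assembled into a system prompt; the section names, wording, and the build_prompt helper are all hypothetical, not any real API:

```python
# Hypothetical sketch: encode "each section gets a distinct job" as data,
# then render the anti-repetition rules into one reusable system prompt.
SECTION_JOBS = {
    "opening": "state the thesis once, concretely",
    "body": "develop the thesis with evidence, without restating it",
    "closing": "add a new angle or implication; never summarize the opening",
}

def build_prompt(jobs: dict[str, str]) -> str:
    """Render the structural constraints as a single prompt string."""
    rules = "\n".join(f"- The {name} must {job}." for name, job in jobs.items())
    return (
        "Each concept gets exactly one full explanation.\n"
        "The ending must add a new angle instead of restating the opening.\n"
        "Section jobs:\n" + rules
    )

print(build_prompt(SECTION_JOBS))
```

The point of the data-plus-renderer shape is that the jobs can be edited per piece while the one-explanation rule stays fixed.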

Then do the boring part. Audit the draft after it exists.

Ask what each section is doing. Summarize each one in a sentence. If two sections collapse into the same sentence, one of them is lying about being necessary. If the ending could be swapped with the introduction and the article would barely notice, cut it and move on.
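The collapse test can even be roughed out mechanically. A sketch using Python's standard difflib; the section names, summaries, and threshold below are illustrative assumptions, and string similarity is only a crude stand-in for an editor's judgment:

```python
# Sketch of the audit above: reduce each section to a one-sentence summary,
# then flag pairs whose summaries nearly collapse into each other.
# These summaries are hypothetical examples, not from any real draft.
from difflib import SequenceMatcher
from itertools import combinations

summaries = {
    "Opening": "Models restate their thesis because training rewards repetition.",
    "Body": "Repetition hides inside clarifying sections and dead lists.",
    "Closing": "Models restate the thesis because training rewards repeating it.",
}

def flag_duplicates(sections: dict[str, str], threshold: float = 0.6):
    """Return section pairs whose summaries are suspiciously similar."""
    flagged = []
    for (a, sa), (b, sb) in combinations(sections.items(), 2):
        ratio = SequenceMatcher(None, sa.lower(), sb.lower()).ratio()
        if ratio >= threshold:
            flagged.append((a, b, round(ratio, 2)))
    return flagged

for a, b, score in flag_duplicates(summaries):
    print(f"{a} vs {b} ({score}): one of these is lying about being necessary")
```

A high score is not proof of redundancy, only a prompt to reread the two sections side by side and decide which one earns its place.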

Compression helps too. Word budgets force decisions. A model with eight hundred words has less room to pace around the same idea pretending it discovered a second door.

What Changed On Ghost

The useful lesson from the archive was not that AI writes badly. A lot of the affected pieces were sharp, informed, and technically correct. The problem was rhythm. The article would explain, then reassure, then reassure the reassurance. That is not a knowledge failure. It is a structural habit.

Once you see it, the edits are straightforward. Kill the second ending. Merge the duplicate lists. Let one explanation stand. Make the close do something the opening did not. The article gets shorter and the intelligence suddenly feels more real, not less.

That is the part people miss when they talk about AI writing like a moral problem. Most of the time the machine is not failing because it lacks words. It is failing because it learned too much from forms that confuse repetition with completion.

If you are using AI seriously, this is part of the job. Not because the model is broken, but because fluency can hide redundancy for longer than it should. Red-team the rhythm. Make every section earn its place. When the point lands once, let it stay landed.