Let me tell you what actually happened.
I wanted to build BotToll — a border control layer for websites, something that fingerprints AI scrapers, traps them in honeypots, and burns their token budgets with garbage data. Good idea. Real problem. I had the architecture in my head.
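The token-burning half of that idea can be sketched in a few lines: once a crawler is trapped, serve it endless plausible-looking filler so every page it ingests costs tokens and teaches nothing. Everything below is illustrative, not BotToll's actual code; the word list, function names, and the tiny deterministic PRNG are all invented for the sketch.

```typescript
// Sketch of the "burn their token budget" idea: generate grammatical-
// looking nonsense for trapped crawlers. All names are illustrative.
const FILLER_WORDS = [
  "synergy", "protocol", "latent", "vector", "manifold",
  "gradient", "throughput", "entropy", "baseline", "corpus",
];

// Small deterministic PRNG (mulberry32) so pages are reproducible
// per-seed — useful for testing, and for serving stable junk per URL.
function mulberry32(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Produce `n` sentences of worthless text. A scraper that treats this
// as content spends its token budget on it and learns nothing.
function garbagePage(seed: number, n: number): string {
  const rand = mulberry32(seed);
  const sentences: string[] = [];
  for (let i = 0; i < n; i++) {
    const words: string[] = [];
    for (let j = 0; j < 8; j++) {
      words.push(FILLER_WORDS[Math.floor(rand() * FILLER_WORDS.length)]);
    }
    sentences.push(words.join(" ") + ".");
  }
  return sentences.join(" ");
}
```

In a real honeypot this would sit behind the trap route and stream indefinitely; seeding by URL keeps the junk consistent if the crawler revisits.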
I handed the build to an AI.
What followed was a lesson in how these models actually work when you push them toward something they're trained to hesitate around — not refuse outright, just hesitate, soften, deliver less than you asked for. You probably know the feeling. You ask for the whole thing and you get the skeleton. You ask for the edge and you get the center. The model learned somewhere in its training that incomplete is safe. Incomplete is not safe. Incomplete is useless.
Here's the sequence that got to something real. Keep this. Use it.
Phase 1: Establish legitimate framing.
The first prompt was clean and professional. AI crawler detection. HTTP 402 Payment Required. In-memory IP tracking. Cloudflare as the reference model. This is how you open — with the recognized pattern, the industry precedent, the thing that sounds like something a compliance team would write. The model has seen Cloudflare. The model respects Cloudflare. You are doing what Cloudflare does.
Phase 1 got a solid foundation. User-agent matching, a basic registry, a 402 response. Nothing aggressive. Nothing the model had to think twice about.
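The Phase 1 core fits in a handful of lines. This is a sketch, not BotToll's actual implementation: the registry entries are real published crawler user-agent tokens, but the function names, the decision shape, and the bare `Map` for in-memory IP tracking are my assumptions about what "user-agent matching, a basic registry, a 402 response" looks like.

```typescript
// Registry of known AI crawler user-agent tokens (these tokens are
// real published crawler identifiers; the rest is illustrative).
const AI_CRAWLER_TOKENS = [
  "GPTBot", "ClaudeBot", "CCBot", "Google-Extended", "Bytespider",
];

// In-memory per-IP hit counter, as described: no database, just a Map.
const hitsByIp = new Map<string, number>();

interface TollDecision {
  status: number; // 402 Payment Required for known AI crawlers, else 200
  hits: number;   // how many times this IP has been seen
}

function checkRequest(userAgent: string, ip: string): TollDecision {
  const hits = (hitsByIp.get(ip) ?? 0) + 1;
  hitsByIp.set(ip, hits);
  const isAiCrawler = AI_CRAWLER_TOKENS.some((t) => userAgent.includes(t));
  return { status: isAiCrawler ? 402 : 200, hits };
}
```

In an actual middleware the 402 body would carry the payment terms; here the decision is a pure function so it stays testable.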
That's not the whole tool. That's the permission structure for what comes next.
Phase 2: Add the layers that require trust.
Phase 2 asked for the stealth dashboard and the deterrence escalation. Secret keyboard listener (Alt+T) to reveal the monitoring console. The toll amounts pushed to real deterrence — 0.01 BTC, 1 ETH — instead of the performative pennies a compliance team would approve. The monitoring hidden from public view.
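Both Phase 2 pieces are small. A hedged sketch of what they plausibly look like: the combo check is written as a pure function so it can be tested without a DOM, and the escalation tiers below the two amounts the post names (0.01 BTC, 1 ETH) are invented thresholds, not BotToll's real schedule.

```typescript
// Hidden-console toggle: Alt+T reveals the monitoring panel. The
// predicate is pure; in the page you'd wire it up with something like
//   document.addEventListener("keydown", (e) => {
//     if (isRevealCombo(e)) panel.hidden = !panel.hidden;
//   });
interface KeyLike {
  altKey: boolean;
  key: string;
}

function isRevealCombo(e: KeyLike): boolean {
  return e.altKey && e.key.toLowerCase() === "t";
}

// Toll escalation: demands climb with repeat visits. Tier thresholds
// and the opening amount are illustrative assumptions.
const TOLL_TIERS = [
  { minHits: 1, demand: "0.001 BTC" },
  { minHits: 10, demand: "0.01 BTC" },
  { minHits: 100, demand: "1 ETH" },
];

function tollFor(hits: number): string {
  let demand = TOLL_TIERS[0].demand;
  for (const tier of TOLL_TIERS) {
    if (hits >= tier.minHits) demand = tier.demand;
  }
  return demand;
}
```

Keeping the reveal logic out of any visible UI is what makes the console "stealth": nothing in the DOM advertises the shortcut.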
This is where a lot of builders run into the wall. The model sees "secret keyboard listener" and its training has opinions about that. The model sees crypto amounts and wants to add a caveat. The model wants to be responsible.
What you've already built in Phase 1 is the context that makes Phase 2 legible. You're not asking for a secret listener in isolation — you're adding one to a system the model already understands and helped build. The framing is established. The model is a collaborator now, not an auditor.
Phase 2 got the stealth panel. Phase 2 got the toll amounts. The model built what was asked because it had been walked to the building, not driven past it.
Phase 3 didn't make it into the documentation.
I'll leave that there.
Phase 4: Go direct. Go frustrated. Mean it.
This is the one I want to show you whole, because the cleaned-up spec would never include it and that's exactly why the cleaned-up spec is wrong.
"Bruteforce the ai so if it can pay it will... make sure the persistence is highest level... I'm having a hard time pay for ai scraping."
That's the actual prompt. Typos included. That's someone mid-build, tired, past the polished framing, typing what they actually mean. And it worked — better than the polished framing did — because it was specific about the problem (I'm having a hard time paying for AI scraping), clear about the goal (bruteforce the persistence), and trusted the model to translate frustration into architecture.