I’ve been using Claude for security work for two years. Not as a certified pentester with a wall of trophies, but as an operator who came up through the dirt: practical experience, offensive writing, and actual red team work for the firms building the very models I’m auditing. I work in the open. My name is on everything. Security done in the light is accountable work, and in a field of shadows, that choice is deliberate.
But Anthropic just updated their policies, and the community is screaming. The question isn’t whether they’re “cowering”; it’s whether the tool is still fit for the mission.
The Formalization of the Wall
A real-world breach changed the game. A threat actor used frontier models to accelerate reconnaissance and automate an intrusion. Anthropic responded by formalizing the restrictions that used to be a “vibe.” Now personal accounts hit hard walls, and API users operate under a tiered system of scrutiny.
The frustration is legitimate. Practitioners have always treated security as a normal technical subject, and now an AI company is handling it like a digital bioweapon. But here is the nuance: Anthropic isn’t trying to kill the conversation; they’re trying to route it through an Accountability Chain.
Accountability as a Service
They’ve introduced a “Cyber Use Case” form. If you’re a legitimate practitioner, you document your work, flag your account, and the wall moves. I applied. If Anthropic wants to know who is doing the work and why, I have no problem being on that list. The only people who should fear an accountability paper trail are the ones doing things they wouldn’t want documented.
This is a rational response to a dual-pressure environment: legal liability and government scrutiny. If Claude assists in a billion-dollar breach, Anthropic needs a legal defense. A documented use-case policy gives them a chain of custody for intent. It’s not “softness”; it’s risk management.
The Real Question: Should You Switch?
If you need a model for nuanced security journalism, threat actor analysis, or explaining an attack chain to a board of directors, Claude is still the best writer in the room. But if you need to generate working exploit code or high-fidelity offensive guidance, the frontier models are no longer your allies.
- GPT-4: Identical walls, different logo.
- Grok: Less restricted, but the gap in reasoning and prose quality is a liability for professional work.
- Gemini: Hard pass. The policy reflex is too conservative for high-voltage security work.
- Local Models (Llama/Mistral): The honest answer. No API, no TOS, no guardrails. The research community has already migrated the heavy lifting to the local stack (see the sketch after this list).
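If “the local stack” sounds abstract, here is a minimal sketch of what it looks like in practice. This assumes an Ollama server exposing its OpenAI-compatible endpoint on localhost; the model name and the prompt are placeholders, not a recommendation.

```python
# Minimal sketch of the local stack: query a model through Ollama's
# OpenAI-compatible endpoint. Assumes `ollama serve` is running and a
# model has been pulled; "llama3" and the prompt are placeholders.
from openai import OpenAI

local = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's default local endpoint
    api_key="ollama",  # Ollama ignores the key, but the client requires one
)

response = local.chat.completions.create(
    model="llama3",  # placeholder; use whatever model you've pulled locally
    messages=[{"role": "user", "content": "Explain process hollowing step by step."}],
)
print(response.choices[0].message.content)
```

Swap the base URL for llama.cpp’s server or any other OpenAI-compatible runner; the point is that the request never leaves your machine.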
The 2026 Verdict
Claude hasn’t gone soft; it’s gone corporate. The knowledge is there, but access now requires a digital signature. For the “Ghost,” the strategy is clear: use the frontier models for the prose and the analysis, but keep a local model warmed up for the parts of the workflow that the corporate lawyers aren’t allowed to see.
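In code terms, that split can be as simple as the sketch below: one interface, two backends. It assumes the official anthropic Python SDK for the frontier side and the same local Ollama endpoint as above; the model names and the keyword heuristic for “sensitive” are illustrative placeholders, not the routing logic I actually run.

```python
# Sketch of the dual-stack workflow: frontier model for prose and analysis,
# local model for the rest. Assumes ANTHROPIC_API_KEY is set and an Ollama
# server is running locally; model names and the keyword heuristic are
# illustrative placeholders, not production routing logic.
import anthropic
from openai import OpenAI

frontier = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
local = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

SENSITIVE = ("exploit", "payload", "shellcode", "bypass")  # crude placeholder filter

def ask(prompt: str) -> str:
    """Route prose and analysis to the frontier; keep sensitive work local."""
    if any(word in prompt.lower() for word in SENSITIVE):
        resp = local.chat.completions.create(
            model="llama3",  # placeholder local model
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
    msg = frontier.messages.create(
        model="claude-3-5-sonnet-20241022",  # placeholder frontier model
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text
```

The heuristic is deliberately dumb; the real routing decision is editorial, not lexical. The architecture is the point: the sensitive half of the workflow never touches a ToS.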
The niche is real. The audience for unsanitized security writing is growing. The tools just require a slightly more intentional setup. Adapt the stack or get used to the “I’m sorry, I can’t assist with that” response.
GhostInThePrompt.com // One authored site. One mind behind it.