The Problem
Some days you're sharp. Twitter rage bait bounces off. Reddit wars look silly. YouTube comment sections are comedy.
Other days you're vulnerable. Someone's angry reply ruins your morning. Doom scroll turns into actual doom. Bot army rage seeps in.
Current solution: Willpower. Self-control. "Just don't read the comments."
Actual solution: Stop reading them. Make software read them for you.
The Concept
Browser extension that scans every page. Detects toxic content. Replaces it with a giant sun SVG.
Not hidden (you'd know something's there, and curiosity kills you).
Replaced. Sun appears. Message: "Blocked: Someone's having a bad day."
Visual peace. Zero poison.
Some Days You Want the Chaos
That's the key insight.
Monday: Feeling strong. Let the bots rage. Entertainment.
Wednesday: Burnt out. Need protection. Enable the sun.
Toggle button. Toolbar icon. Click on, click off. Your choice. Your mood. Your boundary.
Not a permanent filter. A variable shield based on how resilient you feel that day.
How It Works (Technical)
Browser Extension (Manifest V3):
- Content script runs on every page automatically
- Scans all text nodes (comments, posts, replies, headlines)
- Sentiment analysis - catches toxic language
- Replaces matched content with sun SVG
- User toggle in toolbar (on/off, whitelist sites)
- Settings panel for sensitivity levels
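A minimal manifest for that shape of extension might look like this. A sketch, not a finished spec: the extension name and file names are placeholders.

{
  "manifest_version": 3,
  "name": "Sun Shield",
  "version": "0.1.0",
  "permissions": ["storage"],
  "action": { "default_title": "Toggle the sun" },
  "options_page": "options.html",
  "background": { "service_worker": "background.js" },
  "content_scripts": [{
    "matches": ["<all_urls>"],
    "js": ["content.js"],
    "run_at": "document_idle"
  }]
}

No popup on the action means chrome.action.onClicked fires in the service worker — one click, flip a flag in chrome.storage.sync, done.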
The Hybrid Approach (Pragmatic):
Don't call an API for every comment. Expensive. Slow. Unnecessary.
Local keyword filter catches 90% of rage:
- Obvious insults ("idiot," "stupid," "stfu," "kys")
- Rage patterns (all caps, excessive punctuation, slurs)
- Bot signatures (repeated phrases, copy-paste attacks)
Fast. Free. Private. Runs in browser. No API calls.
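Roughly what that looks like in the content script. A sketch: the word list and thresholds are illustrative, tune both to taste.

// Local filter: pure regex, zero network traffic
const INSULTS = /\b(idiot|stupid|moron|dumb|loser|stfu|kys)\b/i;
const RAGE_PUNCT = /!{3,}/;               // 3+ exclamation marks

function allCapsRage(t) {
  const letters = t.replace(/[^A-Za-z]/g, "");
  return letters.length >= 10 && letters === letters.toUpperCase();
}

const seen = new Map();                   // comment text -> count, per page

function isToxic(text) {
  const t = text.trim();
  seen.set(t, (seen.get(t) || 0) + 1);    // track copy-paste repeats
  return INSULTS.test(t) ||
         allCapsRage(t) ||
         RAGE_PUNCT.test(t) ||
         seen.get(t) >= 3;                // same string 3+ times = bot signature
}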
Optional AI check for borderline cases:
- User clicks "Check this one" on false positives
- Extension sends to Claude/OpenAI API
- AI determines: toxic or just passionate?
- User trains their own threshold
Best of both: Speed + intelligence. Privacy + accuracy.
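The borderline check is one API call. A sketch against Anthropic's Messages API — endpoint, headers, and model name are real; the prompt wording and function shape are mine, purely illustrative.

// Ask the model only when the user flags a false positive
async function checkBorderline(text, apiKey) {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-api-key": apiKey,
      "anthropic-version": "2023-06-01"
    },
    body: JSON.stringify({
      model: "claude-3-haiku-20240307",
      max_tokens: 5,
      messages: [{
        role: "user",
        content: `Reply with exactly TOXIC or PASSIONATE:\n\n${text}`
      }]
    })
  });
  const data = await res.json();
  return data.content[0].text.trim() === "TOXIC";
}

In practice you'd route this through the background service worker, add the API host to host_permissions, and keep the key in chrome.storage. Keys never belong in a content script.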
The Sun Replacement
Don't just hide the toxic content. Replace it.
<div class="tox-world-block">
<svg width="120" height="120" viewBox="0 0 120 120">
<!-- sun disc -->
<circle cx="60" cy="60" r="30" fill="#FFD700"/>
<!-- eight rays: four cardinal, four diagonal -->
<g stroke="#FFD700" stroke-width="3">
<line x1="60" y1="10" x2="60" y2="25"/>
<line x1="60" y1="95" x2="60" y2="110"/>
<line x1="10" y1="60" x2="25" y2="60"/>
<line x1="95" y1="60" x2="110" y2="60"/>
<line x1="25" y1="25" x2="35" y2="35"/>
<line x1="85" y1="85" x2="95" y2="95"/>
<line x1="85" y1="25" x2="95" y2="35"/>
<line x1="25" y1="85" x2="35" y2="95"/>
</g>
</svg>
<p>Blocked: Someone's having a bad day</p>
</div>
Big. Friendly. Obvious. Calm.
You see the sun. You know what was there. You don't absorb the poison.
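Wiring it together: walk the likely comment nodes, test each, swap in the sun. A sketch — the selector list is a per-site guess, SUN_HTML is the markup above, isToxic is the filter sketch earlier.

// content.js - scan and replace
function sweep() {
  const candidates = document.querySelectorAll(
    "p, li, [class*='comment'], [class*='reply']"
  );
  for (const el of candidates) {
    if (el.dataset.sunChecked) continue;   // don't re-scan
    el.dataset.sunChecked = "1";
    if (isToxic(el.textContent)) {
      const sun = document.createElement("div");
      sun.innerHTML = SUN_HTML;            // the sun block above
      el.replaceWith(sun);                 // replaced, not hidden
    }
  }
}

sweep();
// Comments load late on most sites - re-sweep on DOM changes
new MutationObserver(sweep).observe(document.body, {
  childList: true,
  subtree: true
});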
Prompting Strategy for Building It
This is a "specific fucking prompting" situation. AI can build browser extensions fast if you're exact.
Vague (AI struggles):
"Make an extension that blocks bad comments"
Exact (AI crushes):
"Manifest V3 Chrome extension. Content script scans text nodes on page load. Regex matches toxic keywords (list: idiot, stupid, stfu, kys, + all caps patterns). Replace matched DOM elements with inline SVG sun (gold #FFD700, 120px). Add toolbar toggle icon (on/off state). Settings page: whitelist domains, sensitivity slider (strict/normal/off). Use chrome.storage.sync for settings persistence."
Technical specification. Testable. AI executes perfectly.
Prompting the Sentiment Filter
Vague:
"Detect toxic language"
Exact:
"Regex patterns: (1) Insults - case-insensitive match: idiot|stupid|moron|dumb|loser. (2) Rage - all caps 10+ chars, 3+ exclamation marks. (3) Slurs - blocklist array. (4) Bot signatures - exact string repeats 3+ times. Return boolean: isToxic true/false."
AI knows exactly what to build.
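Spot checks against the filter sketch above, using the thresholds from that prompt:

isToxic("you absolute idiot")             // true  - keyword match
isToxic("THIS IS COMPLETELY WRONG")       // true  - 10+ chars of all caps
isToxic("No way!!!!")                     // true  - 3+ exclamation marks
isToxic("I strongly disagree with this")  // false - passionate, not toxic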
Prompting the UI
Vague:
"Make it look nice"