How Silicon Valley Sold Bias as "Objectivity"
AI journalism in 2025 promises less bias. Delivers algorithmic propaganda at scale.
News orgs build AI systems trained on their own archives. Amplify existing editorial positions while claiming neutrality.
Gaza to Manhattan. AI is rewriting the news.
The Current Clusterfuck
News outlets implement AI to "reduce human bias." Creating algorithmic bias at machine speed.
Gaza: 185+ journalists killed since October 2023. Deadliest conflict for reporters in recorded history.
AI systems: Churning out sanitized coverage. Regurgitating official statements. No mention of the dead reporters.
Reuters' "Lynx Insight": Six seconds from press release to publication. Gaza casualties? Faithfully reproduces Israeli military assessments. The algorithm doesn't bleed. Doesn't ask why Israel blocked press access for 545+ days.
Not a bug. A feature. AI avoids messy complications of actual journalism.
The Corporate Con
AI eliminates bias? Every major news org trains AI on its own archives. Algorithmic versions of existing editorial positions.
NYT: Suing OpenAI for copyright infringement. Using OpenAI's tools internally ("Echo" system). Wants IP protection while using AI trained on other people's work. Analyzes Gaza satellite imagery—only craters that fit their editorial framework.
Bloomberg "Cyborg": 30% of content since 2014. Trained on financial data. Channels decades of Wall Street-friendly coverage. "Market correction" sounds better than "rich people panic."
Washington Post: Discontinued "Heliograf"—produced content contradicting editorial line. New "Ask The Post AI" trained only on Post archives since 2016. Inherits every Bezos bias.
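The pattern above is mechanical, not mysterious. A minimal sketch (hypothetical data, not any outlet's real pipeline): a "model" that learns by matching the framing frequencies of its training archive reproduces the archive's skew instead of neutralizing it.

```python
import random
from collections import Counter

# Toy illustration with hypothetical data: training on a skewed archive
# yields output with the same skew. "framing_A"/"framing_B" are invented
# labels, not real editorial categories.
random.seed(0)

# Hypothetical archive: 80% of past coverage uses framing A, 20% framing B.
archive = ["framing_A"] * 80 + ["framing_B"] * 20

def train(corpus):
    # "Training" here is just estimating framing frequencies.
    counts = Counter(corpus)
    total = sum(counts.values())
    return {framing: n / total for framing, n in counts.items()}

def generate(model, n_articles):
    # Sample new "articles" from the learned framing distribution.
    framings, weights = zip(*model.items())
    return random.choices(framings, weights=weights, k=n_articles)

model = train(archive)
output = Counter(generate(model, 10_000))
print(model)   # learned distribution: 0.8 / 0.2, same as the archive
print(output)  # generated output mirrors the archive's skew
```

The model never "decides" to be biased. It faithfully compresses what it was fed.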
The Echo Chamber
AI systems cite other AI-generated content. Closed loops of artificial information. Further from reality.
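The drift is quantifiable. A toy sketch of the closed loop (a simple resampling model, not any vendor's actual system): when each "generation" of content is sampled from the previous generation's output, sampling noise alone drives rare sources and claims extinct.

```python
import random

# Toy closed-loop simulation: 50 distinct hypothetical claims; each
# generation's "training data" is resampled from the last generation's
# output. No agenda needed; pure sampling noise collapses diversity.
random.seed(1)

corpus = list(range(50))  # 50 distinct hypothetical claims
for generation in range(20):
    corpus = random.choices(corpus, k=len(corpus))
    print(f"generation {generation:2d}: "
          f"{len(set(corpus))} distinct claims survive")
```

Each pass loses claims it can never get back. That is the echo chamber in two lines of sampling.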
Quartz Intelligence Newsroom: Automated 400-word articles citing "Devdiscourse"—another AI content mill. Creates contradictory headlines. Not designed to understand. Just generate content that looks like news.
Chicago Sun-Times, Philadelphia Inquirer: Ran AI-generated book recommendations. Nobody fact-checked. Saved money on editors. Most of the recommended books didn't exist.
Journalism's subprime lending. Worthless information sold as premium content.
The Technical Truth
AI doesn't eliminate bias. Weaponizes it at scale.
MIT research: Training on "truthful" datasets produced left-leaning bias. Bias increased with model size. More powerful = more biased.
Seven major language models: Substantial gender and racial biases. Systematic discrimination against women and Black individuals.
Most damning: AI demonstrated "covert racism" against African American English speakers. Death penalty recommendations: 27.7% for AAE speakers vs 22.8% for Standard American English. Undetectable by traditional methods. Statistical patterns, not overt language.
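Those two numbers understate themselves. The arithmetic on the reported 27.7% vs 22.8% gap:

```python
# The reported death-penalty recommendation rates, restated as a
# percentage-point gap and a relative increase.
aae_rate = 27.7   # African American English speakers
sae_rate = 22.8   # Standard American English baseline

gap_points = aae_rate - sae_rate
relative_increase = gap_points / sae_rate

print(f"{gap_points:.1f} percentage points")         # 4.9 percentage points
print(f"{relative_increase:.1%} relative increase")  # 21.5% relative increase
```

A 4.9-point gap is a roughly one-in-five higher chance of a death recommendation, triggered by dialect alone.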