Signing up for Claude is not an act of solidarity
... regarding the terror group Anthropic: you do not, under any circumstances, "gotta hand it to them"

A lot of people have been posting about signing up for Claude accounts because Anthropic “stood up” to the Pentagon. This includes our elected officials, even one of my Senators, Andy Kim, who is usually better than this.
I just signed up for Claude as I respect Anthropic for putting responsible use above profit. Hegseth refuses to agree that AI shouldn’t be used for mass surveillance of Americans and for creating robots authorized to kill without a human decision-maker. This is an incredibly important debate and I urge everyone, including other AI/tech companies, to call for responsible use.
Anthropic and Amodei are not heroes.
They created the technology that would allow for mass surveillance and the automation of killing machines.
They entered a contract with the Pentagon, who shows a distinct interest in both of those uses.
They pretended their terms of service would keep it from being used that way.
And the Pentagon used their tools anyway. Their terms of service accomplished nothing and prevented nothing.
They are not the heroes.
They are one of the many AI villains.
Ed Zitron has been a leading voice on all of the problems with AI, not least that it’s incredibly expensive to operate, and very few people want to pay for its outputs, or can afford to do so at the “real” cost. On the question of who is good in the Pentagon vs. Anthropic spat, Zitron notes:
In reality, Claude is likely being used to go through a bunch of images and to answer questions about particular scenarios. There is very little specialized military training data, and I imagine many of the demands for “full access to powerful AI” have come as a result of Amodei and Altman’s bloviating about the “incredible power of AI.” More than likely, Centcom and the rest of the military pepper it with questions that allow it to justify acts that blow up schools, kill US servicemembers and threaten to continue the forever war that has killed millions of people and thrown the Middle East into near-permanent disarray.
Nevertheless, Dario Amodei gets fawning press about being a patriot who deeply cares about safety, less than a week after Anthropic dropped its safety pledge not to train an AI system unless it could guarantee in advance that its safety measures were adequate.
The AI Bubble Is An Information War [Fixed]
Ed is right. This is something WarGames figured out back in 1983.

All the AI companies stole people’s creative outputs.
All the AI companies use poorly paid contractors in the Global South to identify all of the terrible things on the Internet by forcing them to consume that content day after day. One of them, xAI’s Grok, uses that data to boost the terrible, including launching a CSAM machine as a feature; the rest use it to suppress the terrible.
All the AI companies are undoing advancements in green energy, causing coal and natural gas power plants to stay online, or even building new carbon burning capacity.
All the AI companies are building suicide coaches, overdose coaches, fake porn generators, CSAM factories, scam factories, because they think they can profit off those things.
They are all producing slop faster than anyone can consume or identify it. None of this is good. None of this is healthy. There are no heroes here, except the people who are refusing AI mandates pushed on them by companies, apps, and managers.
Delete your Claude account. Or better yet, keep your free account and regularly max out your quota. Let’s burn this shit down as fast as humanly possible.