Below you will find pages that utilize the taxonomy term “Anthropic”
Anthropic: A Technical and Business Model Analysis
By SteveLo — Developer of ccusage_go · March 25, 2026
This analysis examines Anthropic’s technical capabilities, business model, and pricing architecture based on publicly available data, API documentation, academic research, GitHub issues, and first-party token usage analysis using ccusage_go.
All claims are sourced. All numbers are verifiable.
1. Core Technical Capability Assessment
What Anthropic Builds In-House
Anthropic develops a family of large language models (LLMs) called Claude. These models process and generate text. This is Anthropic’s sole proprietary technology.
10 Tips to Stop Claude Code From Burning Your Money and Ignoring Your Instructions
By SteveLo — Developer of ccusage_go
Everyone’s raving about how Claude Code + Codex CLI is the dream combo. “Use Claude to build, Codex to review!” “Two AIs are better than one!”
I hate to break it to you — but the reason this works has nothing to do with Codex being smarter. It’s because Codex gives you a fresh context with zero cache.
That’s right. Your Claude Code isn’t getting dumber because the model is bad. It’s getting dumber because the cache architecture is slowly poisoning it. And Codex “fixes” it simply by not having that problem.
The Cache Trap: How Claude Code's Architecture Costs You 30x More While Making the Model Worse
By SteveLo — Developer of ccusage_go · March 25, 2026
I’ve been a Claude Code Max 20x subscriber ($200/month). I’m also the developer of ccusage_go, an open-source Go tool that parses Claude Code’s local JSONL session logs to calculate real token usage and costs.
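To make that concrete, here is a minimal sketch (not ccusage_go itself) of what parsing those logs involves: each line of a session file is a JSON object, and assistant messages carry a usage block with separate counters for fresh input, output, cache creation, and cache read tokens. The field names below are assumptions borrowed from Anthropic's public API usage object; the exact schema Claude Code writes locally may differ.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// usage mirrors the per-message token counters the Anthropic API reports.
// Field names are assumptions based on the public API usage object; the
// JSONL layout Claude Code writes locally may differ.
type usage struct {
	InputTokens         int `json:"input_tokens"`
	OutputTokens        int `json:"output_tokens"`
	CacheCreationTokens int `json:"cache_creation_input_tokens"`
	CacheReadTokens     int `json:"cache_read_input_tokens"`
}

// logEntry is a minimal view of one line in a session .jsonl file.
type logEntry struct {
	Message struct {
		Usage usage `json:"usage"`
	} `json:"message"`
}

func main() {
	f, err := os.Open(os.Args[1]) // path to a session .jsonl file
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var total usage
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 16*1024*1024) // log lines can be huge
	for sc.Scan() {
		var e logEntry
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // skip malformed lines; entries without usage just add zero
		}
		total.InputTokens += e.Message.Usage.InputTokens
		total.OutputTokens += e.Message.Usage.OutputTokens
		total.CacheCreationTokens += e.Message.Usage.CacheCreationTokens
		total.CacheReadTokens += e.Message.Usage.CacheReadTokens
	}
	fmt.Printf("input=%d output=%d cache_create=%d cache_read=%d\n",
		total.InputTokens, total.OutputTokens,
		total.CacheCreationTokens, total.CacheReadTokens)
}
```

Summing those four counters across every session file is all it takes to see where the tokens, and the money, actually go.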
What I found made me cancel my subscription.
TL;DR
- Claude Code charges you for Cache Read and Cache Create tokens that make up 97.7% of your bill — the actual productive work (API Cost) is only 2.3%.
- One single session cost $55.94. My entire monthly subscription is $200. Three sessions and I’m done.
- The cache architecture doesn’t just cost more — it actively degrades the model’s ability to follow your instructions, creating a vicious cycle where you retry (and pay more) because the model stops listening.
- Boris Cherny (@boris_cherny), Head of Claude Code, promotes his workflows from an internal Anthropic account with no quota limits. Paying users who follow his advice get burned.
- Anthropic has never publicly disclosed how cache tokens count against Max subscription quotas.
Part 1: The Numbers Don’t Lie
One Session, Exposed
Using ccusage_go’s new CC Cost / CR Cost / API Cost breakdown on a single session (topamo-blacklist-feature):
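Whatever the per-session table shows, the breakdown itself is simple arithmetic over the same four counters. The sketch below shows one way to compute it: the per-token prices are assumptions (roughly Sonnet-class list pricing), the mapping of buckets onto the CC / CR / API labels is my reading of ccusage_go's output rather than its source, and the token counts in main are invented, not the numbers from this session.

```go
package main

import "fmt"

// Per-million-token prices. Assumed for illustration (roughly Sonnet-class
// list pricing); check Anthropic's current price sheet before relying on them.
const (
	inputPerM      = 3.00  // fresh (uncached) input tokens
	outputPerM     = 15.00 // output tokens
	cacheWritePerM = 3.75  // cache creation, typically 1.25x the base input rate
	cacheReadPerM  = 0.30  // cache read, typically 0.1x the base input rate
)

// costBreakdown maps raw token counts onto the three buckets discussed above:
// CC Cost (cache create), CR Cost (cache read), API Cost (fresh input + output).
func costBreakdown(input, output, cacheCreate, cacheRead int) (cc, cr, api float64) {
	cc = float64(cacheCreate) / 1e6 * cacheWritePerM
	cr = float64(cacheRead) / 1e6 * cacheReadPerM
	api = float64(input)/1e6*inputPerM + float64(output)/1e6*outputPerM
	return
}

func main() {
	// Invented counts for a long, cache-heavy session (not this session's data).
	cc, cr, api := costBreakdown(50_000, 120_000, 2_000_000, 60_000_000)
	total := cc + cr + api
	fmt.Printf("CC=$%.2f  CR=$%.2f  API=$%.2f  API share=%.1f%%\n",
		cc, cr, api, api/total*100)
}
```

Even with invented counts, the pattern the article describes shows up: once the full context gets re-read from cache on every turn, cache traffic dominates the bill and the share attributable to fresh input and output collapses.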
Anthropic’s ‘Department of War’ Statement: Moral Branding vs. Security Reality
Primary source: Anthropic News: Statement on Department of War
Anthropic’s statement positions the company as a partner to democratic governments while drawing hard lines on specific uses (domestic mass surveillance and fully autonomous weapons). As policy messaging, it is highly polished: it presents national-security cooperation, ethical constraints, and institutional responsibility in one coherent narrative.
The tension appears when that narrative is compared with events from the past year. Claude has repeatedly been framed as a safety-focused model, yet multiple public incidents suggest recurring misuse and extraction pressure. In that context, the statement can be read not only as an ethical position, but also as narrative repositioning: shifting attention from “our model has been abused” to “we are principled gatekeepers.”