Anthropic’s Claude AI Subscription Changes: OpenClaw Ban, Usage Limits, and Financials

Clip title: Anthropic banned OpenClaw…
Author / channel: Matthew Berman
URL: https://www.youtube.com/watch?v=a4hdPWSUzsE

Summary

This video discusses Anthropic’s recent policy changes regarding Claude AI subscriptions, particularly impacting third-party tools like OpenClaw. As of April 4th, 2026, Claude subscriptions will no longer cover usage through these external harnesses. Users are now required to “turn on extra usage” (implying additional costs) or cancel their subscriptions for a full refund before April 9th. The speaker notes that this change was announced with less than 24 hours’ notice, causing significant disruption and confusion among users.

The core reason behind Anthropic’s decision appears to be financial pressure and constrained compute capacity. An analysis by Cursor estimated that a subscription could consume roughly $2,000 in compute, indicating substantial subsidization by Anthropic. To manage this growing demand and resource strain, Anthropic has employed a “carrot and stick” approach. The “carrot” was doubled usage limits outside peak hours and on weekends for two weeks. The “stick” was tightened 5-hour session limits for all subscription tiers during peak weekday hours (5–11 am PT / 1–7 pm GMT), causing users to exhaust their quotas much faster. The speaker also points to Anthropic’s reported uptime of 98.77% for claude.ai, which he considers “effectively unusable” for critical applications.

Despite these user-facing challenges, Anthropic is experiencing explosive growth, boasting a revenue run rate of $30 billion, surpassing OpenAI’s publicly known figures. This rapid expansion and high demand for their models are likely the underlying drivers for their policy changes and capacity management issues. The presenter emphasizes that for users, there is “literally zero switching cost” to move from Claude to other frontier models, especially with tools like OpenClaw which can easily swap backend models (e.g., to GPT-5.4). He advises users to optimize prompts for specific models to get the best performance.
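The “zero switching cost” argument rests on agent tooling treating the model as a pluggable backend. The sketch below illustrates that idea in a generic way; the backend names, model IDs, and the `pick_backend` helper are hypothetical illustrations, not OpenClaw’s actual API or configuration.

```python
# Hypothetical sketch of provider-agnostic backend selection.
# All names and model IDs here are illustrative, not OpenClaw's real config.
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    model: str

# Registry of interchangeable frontier-model backends (illustrative IDs).
BACKENDS = {
    "anthropic": Backend("anthropic", "claude-latest"),
    "openai": Backend("openai", "gpt-latest"),
}

def pick_backend(preferred: str, available: set[str]) -> Backend:
    """Return the preferred backend, falling back to any available one
    when the preferred provider is rate-limited or unavailable."""
    if preferred in available:
        return BACKENDS[preferred]
    for name in BACKENDS:
        if name in available:
            return BACKENDS[name]
    raise RuntimeError("no backend available")

# If Anthropic is capacity-limited, the agent transparently switches:
print(pick_backend("anthropic", {"openai"}).model)  # → gpt-latest
```

Because the agent harness owns the abstraction layer, swapping providers is a one-line change; the per-model work that remains is the prompt optimization the presenter recommends.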

In conclusion, the video portrays Anthropic’s current strategy as frustratingly unclear and unstable for its user base, given the frequent policy changes and capacity limitations. The speaker argues that relying solely on one frontier model is risky, advocating instead for a “multi-model strategy” that incorporates a variety of models, including open-source and local options, for different agentic tasks. This approach offers more reliability and flexibility, especially when companies like Anthropic appear to be prioritizing aggressive growth and the race to AGI over a consistent experience for users of third-party tools.