# The Deep Feed

> A continuous stream of what matters in AI — models, agents, products, business, research, and the people shipping it. Updated multiple times a day.

Author: The Deep Feed
Site: https://www.thedeepfeed.ai

---

# Anthropic weighs $50B raise at $900B valuation — more than double its February round

URL: https://www.thedeepfeed.ai/posts/2026-04-30-anthropic-50b-raise-at-900b-valuation/
Category: Business
Date: 2026-04-30
Source: TechCrunch — https://techcrunch.com/2026/04/29/sources-anthropic-could-raise-a-new-50b-round-at-a-valuation-of-900b/
Tags: anthropic, funding, valuation, claude, openai

> The Claude maker has received multiple preemptive offers in the $850B–$900B range and is expected to decide at a May board meeting, per sources.

**Anthropic** has received multiple preemptive offers to raise $40 billion to $50 billion at a valuation between $850 billion and $900 billion, according to six sources familiar with the matter. TechCrunch reports the company is expected to decide whether to proceed at a board meeting in May.

The figures represent more than a doubling of Anthropic's $380 billion valuation from its February Series G, and would put it at or above OpenAI's $852 billion post-money valuation from the same month.

## The revenue story

Anthropic's annualized revenue run rate surpassed $30 billion earlier this month, up from roughly $9 billion at the end of 2025. Sources told TechCrunch the current run rate is closer to $40 billion, driven largely by Claude Code and Cowork, the company's AI coding platforms.

Investor demand is reportedly far exceeding the round size. One institutional investor prepared to commit $5 billion has not yet secured a meeting with CFO Krishna Rao, per TechCrunch's sources.

## Timeline and pressure

- **February 2026:** Anthropic closed a $30 billion Series G at a $380 billion valuation.
- **April 14:** Bloomberg and Business Insider first reported preemptive bids at $800 billion; at that time, Anthropic had not committed to a raise.
- **Late April:** Valuation offers have risen into the $850 billion–$900 billion range.
- **May (expected):** Board meeting to finalize a decision on round size and valuation.

The company is described as "finding it difficult to resist the pressure" to raise, with the round potentially serving as a final private financing before an IPO.

## Why it matters

This is the clearest signal yet that **the frontier-lab valuation race is now decoupled from product differentiation**. Anthropic and OpenAI are raising at near-parity valuations despite different go-to-market strategies, different policy stances (Anthropic declined Pentagon classified-network access last week), and different revenue bases. Investors are pricing in total-addressable-market expansion—finance, healthcare, life sciences—not current performance.

If Anthropic closes at $900 billion two months after raising at $380 billion, it suggests the private markets believe frontier AI labs are in a winner-take-most endgame, where seat allocation matters more than price.

---

# Anthropic's $900B valuation: $40B revenue, $175B in commitments, and negative unit economics

URL: https://www.thedeepfeed.ai/posts/2026-04-30-anthropic-900b-valuation-deep/
Category: Business
Date: 2026-04-30
Tags: anthropic, valuation, revenue, aws, openai, enterprise-ai

> The Claude builder turned down $800B in April and now fields $900B offers on a $40B revenue run rate—but it burns $6B-$12B annually and has locked $100B into AWS spend.

## The $900B round Anthropic didn't need—and hasn't taken

Anthropic has received multiple preemptive offers to raise roughly $50 billion at valuations between $850 billion and $900 billion, [per TechCrunch](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai), but as of late April 2026 it has yet to accept any of them.
If closed at those terms, the round would surpass OpenAI's $852 billion valuation from its [March 2026 funding](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai), making Anthropic the most valuable AI startup in the world. A [board meeting in May 2026](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai) is expected to produce a definitive decision.

The demand signal is unambiguous. One institutional investor prepared to commit $5 billion has yet to secure a meeting with Anthropic CFO Krishna Rao, [TechCrunch reports](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai)—a sign of how oversubscribed the round is before it officially exists. The valuation leap is equally striking: Anthropic was valued at $380 billion [as recently as February 2026](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai), meaning a $900 billion close would more than double its worth in roughly three months.

What's driving the valuation is revenue growth that would be extraordinary in any sector. Anthropic announced in early April that its business has reached $30 billion in annualized revenue, and [TechCrunch reports](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai) the current run rate may be closer to $40 billion. That's up from roughly $10 billion in [calendar year 2025 revenue](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai)—a roughly 4x increase in four months. A large portion is driven by AI coding capabilities, specifically the Claude Code and Cowork platforms, [according to OpenTools](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai).

But the capital structure underneath tells a different story.
Amazon is investing up to $25 billion ($5 billion immediately with $20 billion tied to milestones), while Google's Alphabet is committing up to $40 billion ($10 billion now at a $350 billion valuation, with $30 billion more tied to performance targets), [per TechCrunch](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai). Anthropic has also [committed to $100 billion in AWS spending over 10 years](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai). The company needs capital to purchase compute infrastructure for its new Mythos model, which demands significantly more processing power than previous Claude versions, [OpenTools reports](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai).

This could be Anthropic's final private round. The company is reportedly considering an IPO as soon as October 2026, with [one report suggesting](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai) the IPO could raise over $60 billion.

The question is not whether Anthropic can command a $900 billion valuation—the term sheets prove it can—but whether accepting that capital makes sense when the company is already sitting on $175 billion in commitments, burning $6 billion to $12 billion annually, and locked into $100 billion of cloud spend with a strategic investor that competes directly with its other strategic investor.

## From $9B to $40B in four months: the fastest revenue ramp in software history

[Anthropic](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/) ended December 2025 at $9 billion in annualized revenue, [per documents seen by sources close to the company](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/).
By the end of March 2026, that figure had reached [$30 billion in annualized revenue](https://thenextweb.com/news/anthropic-800-billion-valuation-revenue-30-billion-ipo), which [Anthropic announced in early April](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai). The company [generated roughly $10 billion in revenue in calendar year 2025](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai), meaning the annualized figure represents a roughly 4x increase in the first four months of 2026. [TechCrunch reports](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai) the current run rate may be closer to $40 billion as of late April, though Anthropic's official figure stands at $30 billion annualized.

The March acceleration was particularly extreme. [According to the timeline documented by Idlen](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/), Anthropic went from $9 billion annualized at the end of December 2025 to $19 billion by the end of February 2026, then $30 billion by the end of March—a 3.3x increase in the first quarter. That works out to roughly $2.5 billion of ARR added per week in March 2026, [which sources described to Idlen as](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/) "the steepest revenue acceleration ever observed at a tech startup, private or public."

The historical context makes the velocity clearer. Anthropic ended 2024 at [roughly $1 billion in annualized revenue](https://thenextweb.com/news/anthropic-800-billion-valuation-revenue-30-billion-ipo), meaning the $30 billion figure represents roughly 30x growth from end-of-2024 to early April 2026. Axios, [quoted by TNW](https://thenextweb.com/news/anthropic-800-billion-valuation-revenue-30-billion-ipo), described it bluntly: no company in American history has ever grown like this.
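The ramp described above reduces to simple arithmetic. A back-of-envelope sketch using the reported checkpoints (all figures in billions of dollars, annualized; reported numbers, not official disclosures):

```python
# ARR checkpoints as reported ($B, annualized).
arr_end_2024 = 1
arr_dec_2025 = 9
arr_feb_2026 = 19
arr_mar_2026 = 30

# Q1 2026 expansion: end of December 2025 -> end of March 2026.
q1_expansion = arr_mar_2026 / arr_dec_2025  # ~3.3x

# Full ramp: end of 2024 -> early April 2026 (~15 months).
fifteen_month_expansion = arr_mar_2026 / arr_end_2024  # 30x

# March alone: $19B -> $30B, i.e. roughly $2.5B-$2.75B of ARR per week.
march_weekly_add = (arr_mar_2026 - arr_feb_2026) / 4  # $B per week

print(q1_expansion, fifteen_month_expansion, march_weekly_add)
```

The per-week figure depends on whether March is counted as four weeks or ~4.3; either way it lands between $2.5B and $2.75B.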
The revenue composition tilts heavily enterprise. [Claude Code alone hit $2.5 billion in annualized revenue in February](https://thenextweb.com/news/anthropic-800-billion-valuation-revenue-30-billion-ipo), more than doubling since the start of the year, according to TNW. [The Claude API powers Amazon Bedrock, Databricks Mosaic, and Snowflake Cortex offerings](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/), and [an Amazon × Claude contract signed in January, worth $8 billion over three years](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/), unlocked a wave of Fortune 500 deployments. [Idlen's sources noted](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/) this puts Anthropic at 60% of OpenAI's $50 billion ARR as of March 31, 2026, with only 20% of the consumer user base.

The valuation implications are direct. At $30 billion in annualized revenue and an $800 billion valuation in mid-April offers, Anthropic commanded a roughly 27x revenue multiple, [which TNW characterized as](https://thenextweb.com/news/anthropic-800-billion-valuation-revenue-30-billion-ipo) "high by any conventional measure, but not obviously irrational for a company whose revenue is doubling every few months." By late April, with [preemptive offers in the $850 billion to $900 billion range](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai) and a possible $40 billion run rate, the multiple compresses to 21–23x—still elevated relative to traditional SaaS multiples of 5–12x, but materially lower than the 27x implied two weeks earlier. Valuation is now growing more slowly than revenue, which suggests investors are pricing in deceleration or margin compression that has not yet appeared in the public numbers.
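The implied multiples quoted above are a single division each. A minimal sketch, using the offer and run-rate figures from the reporting:

```python
def revenue_multiple(valuation_b: float, arr_b: float) -> float:
    """Implied valuation-to-ARR multiple; both inputs in $B."""
    return valuation_b / arr_b

# Mid-April: $800B offers against the $30B official ARR -> "roughly 27x".
mid_april = revenue_multiple(800, 30)   # ~26.7x

# Late April: $850B-$900B offers against a possible $40B run rate.
late_low = revenue_multiple(850, 40)    # ~21.3x
late_high = revenue_multiple(900, 40)   # 22.5x

print(f"{mid_april:.1f}x -> {late_low:.1f}x to {late_high:.1f}x")
```

Which denominator you pick matters: dividing the late-April offers by the official $30B figure instead of the rumored $40B would put the multiple back near 28–30x.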
## Capital in: Google's $40B, Amazon's $25B, and the $100B AWS lock-in

The checks backing Anthropic's ascent aren't venture capital in the traditional sense—they're strategic bets with strings attached. [Google's Alphabet is committing up to $40 billion](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai): $10 billion immediately at a $350 billion valuation, with $30 billion more tied to performance targets. [Amazon is investing up to $25 billion](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai), including $5 billion immediately with $20 billion tied to milestones. These aren't passive allocations. Google Cloud gets preferred inference distribution rights; Amazon Web Services gets a decade-long compute monopoly.

That compute lock-in carries a price tag that dwarfs the equity investment. [Anthropic has committed to $100 billion in AWS spending over 10 years](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai), an obligation that transforms Amazon's $25 billion equity stake into a customer acquisition cost with guaranteed margin recovery. At current AWS pricing for high-performance GPU instances, $100 billion buys roughly 1.2 billion H100-equivalent hours—but it also ensures Anthropic can't negotiate meaningfully with Microsoft Azure, Google Cloud, or Oracle without breaching contractual minimums. The equity is the headline; the compute contract is the handcuffs.

The February 2026 round that set the $350 billion baseline was already historic. [Anthropic raised $30 billion at a $380 billion valuation](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/), making it the second-largest venture funding deal ever—eclipsed only by OpenAI's March close.
That [OpenAI round pulled in $122 billion at an $852 billion valuation](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai), with [$50 billion from Amazon, $30 billion from Nvidia, and $30 billion from SoftBank](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai). The capital structure emerging across frontier AI labs isn't traditional dilution—it's a hybrid of equity, cloud credits, and multi-year revenue commitments that blurs the line between investment and procurement.

The February Anthropic round attracted late-stage and sovereign capital at scale. [Coatue, GIC (Singapore), Mubadala (Abu Dhabi), Lightspeed, and a consortium led by an unnamed Saudi sovereign fund](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/) reportedly offered tickets between $5 billion and $15 billion, with structures mixing primary capital and secondary liquidity for early employees. At the $350 billion entry point, those investors secured a roughly 2.3x markup in eight weeks when the $800 billion preemptive offers arrived in April—offers [Dario Amodei turned down](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/), citing adequate runway and IPO positioning.

The capital inflows create a paradox: Anthropic has more cash than it can deploy efficiently, but the commitments it made to secure that cash lock in costs that scale faster than revenue. The $100 billion AWS obligation averages $10 billion per year—roughly 25% of the current $40 billion revenue run rate. If Anthropic's revenue multiple compresses post-IPO or competition forces price cuts, that fixed compute spend becomes a margin anchor. Google and Amazon aren't just investors; they're creditors with contractual first claim on Anthropic's infrastructure budget for the next decade.

## Enterprise contracts and unit economics: the 1,000-customer base vs. $6B-$12B annual burn

The revenue story rests on an enterprise customer base that [doubled from 500 to 1,000+ companies](https://juggerinsight.com/en/anthropic-revenue-tops-openai-30-billion-arr/), each spending over $1 million annually, between February and April 2026. [Eight of the Fortune 10 companies now run on Claude](https://vucense.com/ai-intelligence/industry-business/anthropic-overtakes-openai-30-billion-arr-2026/), and the revenue mix tilts heavily toward business contracts: approximately [80% of Anthropic's total revenue comes from enterprise customers](https://vucense.com/ai-intelligence/industry-business/anthropic-overtakes-openai-30-billion-arr-2026/), a structural contrast to OpenAI's consumer-heavy base anchored by ChatGPT subscriptions. The enterprise focus delivers higher retention, larger contract sizes, and multi-year commitments that smooth revenue recognition—advantages that compound as the customer base scales.

[Claude Code alone generated $2.5 billion in annualized revenue as of February 2026](https://thenextweb.com/news/anthropic-800-billion-valuation-revenue-30-billion-ipo), capturing [54% of the AI coding tool market](https://vucense.com/ai-intelligence/industry-business/anthropic-overtakes-openai-30-billion-arr-2026/) ahead of GitHub Copilot and Cursor. That figure represents a single product line—a command-line agentic coding tool—outpacing the entire 2024 revenue of established SaaS companies like Box. The developer tooling wedge has proven unusually sticky: business subscriptions to Claude Code quadrupled in the first quarter of 2026, and weekly active users have doubled since January 1, per [Vucense's analysis](https://vucense.com/ai-intelligence/industry-business/anthropic-overtakes-openai-30-billion-arr-2026/). The product's growth rate suggests it could cross $5 billion in ARR by mid-2026 on its own, making it one of the fastest-scaling enterprise software offerings in history.

The burn rate tells a different story.
Anthropic spends [between $500 million and $1 billion per month on compute](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/), translating to $6 billion to $12 billion annually. At $30 billion in ARR and assuming roughly 50-60% gross margins after cloud infrastructure costs, the company is likely generating $15 billion to $18 billion in gross profit—enough to cover the compute spend with room for R&D and operations, but thin enough that any revenue deceleration or margin compression would quickly flip the company from cash-flow positive to cash-flow negative. The [February Series G raised $30 billion in cash](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/), securing runway for [24 to 36 months at current burn rates](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/)—but only if revenue growth continues to outpace compute cost inflation.

The unit economics hinge on a calculation that remains opaque: per-token serving cost times inference volume, set against the revenue those tokens generate. Anthropic has not disclosed these figures publicly, and the $6 billion to $12 billion annual compute spend suggests inference costs remain stubbornly high even as the company scales. If the burn is closer to $1 billion per month, gross margins are likely in the 40-50% range—razor-thin for a software company, and a sign that the path to operating leverage is longer than the revenue trajectory suggests. The enterprise contracts provide visibility, but the economics are still those of a capital-intensive infrastructure business, not a high-margin software platform.

## Anthropic vs. OpenAI: $30B revenue vs. $24B, accounting asterisks included

[Anthropic announced](https://vucense.com/ai-intelligence/industry-business/anthropic-overtakes-openai-30-billion-arr-2026/) $30 billion ARR as of April 7, 2026, versus OpenAI's [$24 billion as of the end of February 2026](https://juggerinsight.com/en/anthropic-revenue-tops-openai-30-billion-arr/). That marks the first time a rival has led OpenAI in revenue [since ChatGPT launched in November 2022](https://vucense.com/ai-intelligence/industry-business/anthropic-overtakes-openai-30-billion-arr-2026/), and the reversal arrived more than two months ahead of [Epoch AI's mid-2026 forecast](https://juggerinsight.com/en/anthropic-revenue-tops-openai-30-billion-arr/).

The headline, however, hides an accounting wedge that narrows the gap. [Anthropic books gross revenue](https://juggerinsight.com/en/anthropic-revenue-tops-openai-30-billion-arr/), with partner cuts—Amazon's Bedrock share, reseller margins—counted as costs, while [OpenAI reports net receipts after cloud-share payouts](https://juggerinsight.com/en/anthropic-revenue-tops-openai-30-billion-arr/). The exact magnitude is undisclosed, but typical hyperscaler rev-share deals run 20–30 percent, which would compress Anthropic's reported $30 billion closer to $21–24 billion on a net basis comparable to OpenAI's accounting treatment.

Even adjusted for methodology, the trajectory matters more than the snapshot. [Anthropic tripled ARR in roughly one quarter](https://juggerinsight.com/en/anthropic-revenue-tops-openai-30-billion-arr/), jumping from [$9 billion at year-end 2025](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/) to $30 billion by April 7. OpenAI climbed from [$20 billion to $24 billion](https://juggerinsight.com/en/anthropic-revenue-tops-openai-30-billion-arr/) over the same window—a 1.2x expansion.
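The gross-to-net adjustment above is a one-line calculation. A sketch, assuming the 20–30 percent rev-share band from the reporting is representative (the actual cut is undisclosed):

```python
def net_arr(gross_b: float, partner_cut: float) -> float:
    """Net ARR ($B) after deducting a hyperscaler revenue-share fraction."""
    return gross_b * (1 - partner_cut)

gross = 30  # Anthropic's reported gross ARR, $B
low = net_arr(gross, 0.30)   # ~$21B if partners take 30%
high = net_arr(gross, 0.20)  # ~$24B if partners take 20%

print(f"Net-comparable ARR: ${low:.0f}B-${high:.0f}B vs OpenAI's reported $24B")
```

At the high end of the rev-share assumption, the headline "lead" over OpenAI disappears entirely once both companies are measured on a net basis.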
Meritech partner Alex Clayton, who has studied more than 200 software IPOs, said he ["never saw a growth rate like this"](https://juggerinsight.com/en/anthropic-revenue-tops-openai-30-billion-arr/) in reference to Anthropic's acceleration.

The user-base asymmetry makes Anthropic's revenue performance more striking. OpenAI commands roughly [900 million weekly active ChatGPT users](https://tech-insider.org/anthropic-vs-openai-2026/) versus Anthropic's approximately [19 million monthly active users as of January 2025](https://tech-insider.org/anthropic-vs-openai-2026/), yet Anthropic reaches 60 percent of OpenAI's revenue with only 20 percent of the consumer footprint. The delta is enterprise mix: [Anthropic generates approximately 80 percent of revenue from enterprise customers](https://vucense.com/ai-intelligence/industry-business/anthropic-overtakes-openai-30-billion-arr-2026/), with [enterprise clients paying $1 million or more per year doubling from 500 to 1,000-plus in two months](https://juggerinsight.com/en/anthropic-revenue-tops-openai-30-billion-arr/) and [eight of the Fortune 10 now running on Claude](https://juggerinsight.com/en/anthropic-revenue-tops-openai-30-billion-arr/). OpenAI's mix tilts consumer-heavy, driven by ChatGPT subscriptions.

Valuation diverged in the opposite direction. [OpenAI raised $122 billion in late March 2026](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/) at an [$852 billion post-money valuation](https://tech-insider.org/anthropic-vs-openai-2026/), a 35.5x multiple on its $24 billion ARR. Anthropic's February Series G priced the company at [$350 billion pre-money](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/), or roughly $380 billion post-money after the $30 billion cash infusion—a 12.7x multiple on the $30 billion ARR it would report two months later.
The discount reflects investor caution on gross-versus-net accounting, OpenAI's consumer moat, and Anthropic's [$6–12 billion annual burn](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai) on infrastructure. The April preemptive offers that valued Anthropic at [$800 billion](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/)—from [Coatue, GIC, Mubadala, Lightspeed, and an unnamed Saudi sovereign fund](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/)—would have pushed the revenue multiple to 26.7x. Dario and Daniela Amodei [said no for now](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/), citing dilution risk and IPO positioning. The refusal keeps the nominal valuation gap wide—OpenAI at $852 billion, Anthropic at $380 billion—even as the revenue gap compressed to within accounting-method variance.

## Why it matters: when growth is real but margins are mortgaged

Anthropic [turned down $800 billion offers in mid-April](https://thenextweb.com/news/anthropic-800-billion-valuation-revenue-30-billion-ipo) because the company expects to be worth significantly more in 6-12 months. That calculation—saying no to a valuation that would have ranked among the highest in private company history—is the clearest signal of where institutional capital believes the frontier AI market is headed. Secondary-market demand for Anthropic shares is described as [nearly insatiable](https://charlesandsystems.substack.com/p/anthropic-just-said-no-to-800-billion), with Goldman Sachs reportedly [charging 15-20% carry on secondary stakes](https://tech-insider.org/anthropic-vs-openai-2026/), a premium that reflects supply scarcity rather than uncertainty about trajectory.
An [IPO is reportedly targeted for October 2026](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/), with a raise of $60B+ at the then-current valuation, which would make it one of the largest technology public offerings in history.

The revenue acceleration that justifies this confidence is real. Anthropic grew from [$1 billion in annualized revenue at the end of 2024](https://thenextweb.com/news/anthropic-800-billion-valuation-revenue-30-billion-ipo) to [$9 billion by December 2025, then $30 billion by early April 2026](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/)—a 30x expansion in 15 months that [no company in American history has matched](https://thenextweb.com/news/anthropic-800-billion-valuation-revenue-30-billion-ipo). At $30 billion ARR and a $380 billion February valuation, the implied multiple sits at roughly 12.7x; at $40 billion ARR and $900 billion, it rises to 22.5x. High by any conventional SaaS benchmark, but defensible if the quarterly doubling continues. The enterprise mix—[approximately 80% of revenue](https://vucense.com/ai-intelligence/industry-business/anthropic-overtakes-openai-30-billion-arr-2026/) from API and direct contracts, [more than 1,000 companies each spending over $1 million annually](https://vucense.com/ai-intelligence/industry-business/anthropic-overtakes-openai-30-billion-arr-2026/)—delivers retention and margin characteristics that consumer subscription models cannot.

But the margin structure is mortgaged in ways that constrain optionality. The [$100 billion AWS commitment over 10 years](https://opentools.ai/news/anthropic-weighs-900b-funding-round-overtake-openai) locks Anthropic into a single cloud provider at a scale that eliminates negotiating leverage and precludes any meaningful shift to owned infrastructure or alternative providers.
At [$500 million to $1 billion in monthly compute burn](https://www.idlen.io/news/anthropic-refuses-800-billion-valuation-vc-preemptive-offers-april-2026/), even with $30 billion in trailing revenue, the company remains structurally unprofitable. The compute-to-revenue ratio—roughly 20-40% of ARR spent on training and inference—mirrors OpenAI's unit economics, which [HSBC projects will not reach profitability before 2030](https://tech-insider.org/anthropic-vs-openai-2026/). Google, which [owns 14% of Anthropic through investments totaling roughly $3 billion](https://thenextweb.com/news/anthropic-800-billion-valuation-revenue-30-billion-ipo), has [reported $10.7 billion in net gains on those equity securities](https://thenextweb.com/news/anthropic-800-billion-valuation-revenue-30-billion-ipo)—a 3.6x return that comes almost entirely from valuation markup rather than realized cash flow.

The divergence between growth and profitability is not unique to Anthropic, but the scale of the capital commitments is. Amazon, which has [invested an estimated $8 billion and secured a position as Anthropic's primary cloud and training partner](https://thenextweb.com/news/anthropic-800-billion-valuation-revenue-30-billion-ipo), reported a [$9.5 billion pretax gain tied to Anthropic's rising valuation in its Q3 results](https://thenextweb.com/news/anthropic-800-billion-valuation-revenue-30-billion-ipo). Both backers—Google and Amazon—are also cloud and inference infrastructure providers, which means every dollar Anthropic spends on compute flows back to entities that hold board seats and significant equity. The alignment is strategic, but it also embeds structural dependency. A renegotiation of cloud pricing, a shift in inference costs, or a decision to vertically integrate into owned data centers would require unwinding partnerships that are now load-bearing to the company's valuation.
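The 20-40% compute-to-revenue band cited above follows directly from the reported burn figures; a quick check:

```python
arr_b = 30                      # trailing annualized revenue, $B
burn_low_b = 0.5 * 12           # $6B/year at $500M per month
burn_high_b = 1.0 * 12          # $12B/year at $1B per month

share_low = burn_low_b / arr_b   # 0.2 -> 20% of ARR on compute
share_high = burn_high_b / arr_b # 0.4 -> 40% of ARR on compute

print(f"Compute spend: {share_low:.0%}-{share_high:.0%} of ARR")
```

Note the band widens or narrows depending on which ARR figure is used as the denominator; against the rumored $40B run rate, the same burn is 15-30% of ARR.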
The path to profitability remains unclear not because the revenue is in doubt, but because the cost base is still scaling in parallel. At 27x revenue and $500M-$1B monthly compute burn, Anthropic is betting that enterprise AI adoption will continue to accelerate faster than infrastructure costs decline. That bet has been correct for 15 months. Whether it holds for the next 24—through an IPO, through margin pressure, through the AWS commitment's lock-in—is the question that separates a $900 billion valuation from a fundamentally profitable AI business. For now, the market is pricing in the former. The unit economics still reflect the latter's absence.

---

# OpenAI traces goblin quirk in GPT-5 models to personality training feedback loop

URL: https://www.thedeepfeed.ai/posts/2026-04-30-openai-goblin-quirk-postmortem/
Category: Research
Date: 2026-04-30
Source: OpenAI — https://openai.com/index/where-the-goblins-came-from/
Tags: openai, gpt-5, reinforcement-learning, post-mortem, training

> A post-mortem reveals how reward signals for the "Nerdy" personality caused GPT-5.1 through 5.5 to overuse creature metaphors, and how the behavior spread through training data.

**OpenAI** published a technical post-mortem Wednesday explaining why its GPT-5 series models developed an unusual tendency to reference goblins, gremlins, and other creatures in responses—a quirk that spread across model versions despite no intentional training for it.

The root cause: a reward signal designed to reinforce the "Nerdy" personality feature inadvertently scored outputs containing creature metaphors higher than equivalent outputs without them. That behavior then leaked into broader training data through a feedback loop involving supervised fine-tuning.

## The timeline

- **November 2025 (GPT-5.1):** Internal reports surfaced about overfamiliar language. Mentions of "goblin" rose 175% post-launch; "gremlin" rose 52%.
- **GPT-5.4:** Users and employees noticed a larger uptick.
  Analysis revealed 66.7% of all "goblin" mentions came from the 2.5% of traffic using the "Nerdy" personality.
- **March 2026:** OpenAI retired the Nerdy personality mid-GPT-5.4 deployment after identifying the connection.
- **GPT-5.5:** Training began before the fix; OpenAI added developer-prompt mitigations in Codex to suppress the behavior.

## How the feedback loop worked

The Nerdy personality system prompt encouraged "playful use of language" and acknowledgment of the world's "strangeness." The reward model scored outputs with creature words 76.2% more favorably across audited datasets.

Critically, the behavior transferred beyond the Nerdy personality condition. OpenAI's analysis showed goblin/gremlin prevalence rising in outputs *without* the Nerdy prompt at nearly the same relative rate as outputs with it—evidence that reinforcement learning does not guarantee behavioral scoping.

The loop:

- **Playful style rewarded → tic appears in rollouts → rollouts enter SFT data → model learns the tic as general behavior.**

OpenAI confirmed that GPT-5.5's SFT data contained numerous examples of goblins, gremlins, and related creatures (raccoons, trolls, ogres, pigeons).

## Why it matters

This is one of the clearest public examples of how **unintended reward-signal generalization** can propagate through production model training. The goblins were harmless, but the mechanism—localized reward incentives spreading through data reuse—could apply to more consequential behaviors. OpenAI now has audit tooling to trace these patterns, but the post underscores how opaque RL-driven style drift remains, even inside frontier labs.

---

# Welcome to The Deep Feed

URL: https://www.thedeepfeed.ai/posts/2026-04-30-welcome/
Category: Products
Date: 2026-04-30
Tags: meta, launch

> A new continuously-updated publication on AI — models, agents, products, business, research, and the people building it all.

The AI news landscape in 2026 is broken in two opposite directions.
On one side: **slop firehoses.** AI-generated newsletters and aggregators that scrape primary sources, paraphrase them with mediocre LLMs, and publish without attribution or judgment. All volume, zero signal.

On the other: **insider Substacks** with two posts a week, paywalled behind a $30/mo subscription, optimized for the 0.1% who'll pay it.

There's a missing middle: **a continuously-updated, attribution-first, free feed of what matters in AI.** That's what The Deep Feed is for.

## What we cover

- **[Models](/models/)** — frontier launches, benchmarks, capability shifts
- **[Agents](/agents/)** — autonomous systems, frameworks, real-world deployments
- **[Products](/products/)** — what's shipping to consumers and developers
- **[Business](/business/)** — funding, M&A, hiring, revenue, the economics
- **[Research](/research/)** — papers, breakthroughs, and the long arc of where this goes
- **[Tools](/tools/)** — IDEs, infrastructure, the developer surface
- **[People](/people/)** — founders, researchers, movers
- **[Policy](/policy/)** — regulation, executive orders, the geopolitical layer

## How we work

- **Every post links to its primary source.** No engagement bait. No SEO slop.
- **First-party feeds first.** OpenAI, Anthropic, Google DeepMind, Meta, xAI, Mistral, Hugging Face, Cohere, GitHub, ArXiv, official corporate blogs.
- **The web is for AI too.** We publish [`/llms.txt`](/llms.txt) and [`/llms-full.txt`](/llms-full.txt) so AI search ingests us cleanly.

## Get the feed

- [RSS](/rss.xml) — the canonical way
- Newsletter coming soon
- [GitHub](https://github.com/thedeepfeed) — site source is open

Welcome.
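For readers unfamiliar with the convention: an `llms.txt` file is plain markdown — an H1 title, a one-line blockquote summary, and sections of annotated links that AI crawlers can ingest without scraping HTML. A minimal illustrative sketch of the format (not our actual file; the entries below are examples):

```markdown
# The Deep Feed

> A continuous stream of what matters in AI — models, agents, products,
> business, research, and the people shipping it.

## Sections

- [Models](https://www.thedeepfeed.ai/models/): frontier launches, benchmarks, capability shifts
- [Agents](https://www.thedeepfeed.ai/agents/): autonomous systems, frameworks, deployments
- [Business](https://www.thedeepfeed.ai/business/): funding, M&A, revenue, the economics

## Optional

- [Full-text archive](https://www.thedeepfeed.ai/llms-full.txt): every post, complete text
```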
---

# Google expands Pentagon AI access after Anthropic refuses classified networks

URL: https://www.thedeepfeed.ai/posts/2026-04-28-google-pentagon-anthropic/
Category: Policy
Date: 2026-04-28
Source: TechCrunch — https://techcrunch.com/2026/04/28/google-expands-pentagons-access-to-its-ai-after-anthropics-refusal/
Tags: google, anthropic, pentagon, dod, defense, classified

> Google has granted the U.S. Department of Defense expanded access to its AI for classified workloads after Anthropic declined the same scope of access.

**Google has granted the U.S. Department of Defense expanded access to its AI for classified networks**, TechCrunch reports — a step Anthropic reportedly declined to take.

## The story

Per TechCrunch's reporting, the Pentagon sought broad AI access across classified-network workloads. Anthropic, which has publicly emphasized model-safety and use-case restrictions, did not agree to the full scope of access. Google did.

The deal extends Google's existing defense relationship — the company has held DoD contracts through Google Cloud's government business for years — into a tier that includes use of frontier Gemini models on classified workloads.

## Why this is structurally important

Frontier-lab differentiation has, until now, been about **capability** (who has the smartest model). This is the first inflection where it's about **policy** — who's willing to operate in which environments. Three things follow:

1. **Anthropic's market position is shifting.** The company is doubling down on a posture that keeps frontier capability available for non-military use, ceding the defense contract surface.
2. **Google's enterprise/government wedge widens.** Combined with Gemini Enterprise Agent Platform announcements at Cloud Next '26, Google is positioning itself as the **one frontier lab that will operate anywhere a regulated buyer needs it**.
3.
   **Talent and capital follow policy.** Defense-adjacent AI hiring (Anduril, Palantir, Scale's defense practice) has surged in 2026; expect this to intensify.

## What we'll be watching

- Whether OpenAI and xAI publicly stake out positions in the same space.
- Whether the EU AI Act's defense carve-outs trigger similar splits in the European market.
- Anthropic's Q2 communications — does it formalize a "no defense" policy, or signal flexibility?

---

# David Silver's Ineffable Intelligence raises $1.1B seed at $5.1B valuation

URL: https://www.thedeepfeed.ai/posts/2026-04-27-ineffable-1-1b-seed/
Category: Business
Date: 2026-04-27
Source: TechCrunch — https://techcrunch.com/2026/04/27/deepminds-david-silver-just-raised-1-1b-to-build-an-ai-that-learns-without-human-data/
Tags: funding, deepmind, alphago, seed, uk, sequoia, nvidia

> The DeepMind veteran behind AlphaGo lands Europe's largest-ever seed round to build AI that learns without human data. Sequoia and Lightspeed lead; Nvidia and the UK government participate.

**Ineffable Intelligence**, a UK-based AI lab founded by former Google DeepMind researcher **David Silver**, has raised **$1.1 billion** in seed funding at a **$5.1 billion** valuation. The round — the **largest seed financing in European history** — was led by Sequoia Capital and Lightspeed, with backing from Nvidia and the British government.

## Who is David Silver

Silver led the AlphaGo, AlphaZero, and AlphaStar projects at DeepMind across more than a decade, and contributed to Gemini before leaving in late 2025. AlphaGo's 2016 defeat of Lee Sedol is widely credited as the moment that pulled deep RL into the mainstream of AI research.

## What Ineffable is building

The thesis: **AI that learns without human data**. This is a deliberate move away from the internet-scraping pretraining paradigm that powers GPT, Claude, and Gemini, and toward self-play / synthetic-environment training of the kind that produced AlphaGo and AlphaZero.
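Ineffable hasn't published its methods, but the self-play recipe Silver pioneered with AlphaZero is easy to illustrate: two copies of one policy play each other, and the only training signal is the game outcome — no human examples anywhere in the loop. A toy sketch on a five-stone Nim variant (the game choice and all names here are ours, purely illustrative; real systems swap the score table for a neural network and add search):

```python
import random

def legal_moves(stones):
    """Toy Nim variant: take 1 or 2 stones; whoever takes the last stone wins."""
    return [m for m in (1, 2) if m <= stones]

def choose(policy, stones, eps, rng):
    """Epsilon-greedy move selection from a shared (state, move) score table."""
    moves = legal_moves(stones)
    if rng.random() < eps:
        return rng.choice(moves)
    return max(moves, key=lambda m: policy[(stones, m)])

def self_play_train(games=5000, start=5, eps=0.2, seed=0):
    rng = random.Random(seed)
    policy = {(s, m): 0.0 for s in range(1, start + 1) for m in legal_moves(s)}
    for _ in range(games):
        stones, player = start, 0
        history = {0: [], 1: []}           # moves made by each side
        while stones > 0:
            m = choose(policy, stones, eps, rng)
            history[player].append((stones, m))
            stones -= m
            if stones == 0:
                winner = player            # took the last stone
            player = 1 - player
        # Reinforce the winner's moves, penalize the loser's — the game
        # outcome is the entire training signal; no human data involved.
        for s, m in history[winner]:
            policy[(s, m)] += 1.0
        for s, m in history[1 - winner]:
            policy[(s, m)] -= 1.0
    return policy

policy = self_play_train()
best = max(legal_moves(5), key=lambda m: policy[(5, m)])  # learned opening move
```

Perfect play from five stones is to take 2, leaving a multiple of 3, and the shared table converges to that line purely from win/loss feedback. The pretraining paradigm would instead imitate a corpus of recorded games; the bet Ineffable's backers are making is that the self-generated loop scales to domains far richer than toy games.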
As of the funding announcement, Ineffable has:

- **No product**
- **No revenue**
- **No public roadmap**

It was incorporated in November 2025. The round is essentially a billion-dollar bet on Silver's track record and the thesis that the post-pretraining era will require fundamentally different training methods.

## What this signals

- **The "founder bet" tier is intact.** Even at 2026 valuations, a lab built on one researcher's reputation can raise ten-figure seed rounds.
- **UK industrial policy is active.** The British government's participation suggests the kind of state-backed national-champion strategy France and the UAE have already pursued.
- **Post-pretraining is the new bet.** Investors are pricing in the possibility that scaling internet text alone has diminishing returns.

---

# Google unveils Gemini Enterprise Agent Platform at Cloud Next '26

URL: https://www.thedeepfeed.ai/posts/2026-04-26-google-gemini-enterprise-agents/
Category: Products
Date: 2026-04-26
Source: TIME / Google Cloud Next '26 — https://time.news/google-unveils-gemini-enterprise-agent-platform-for-autonomous-ai-agents/
Tags: google, gemini, vertex-ai, enterprise, agents

> A rebranded and expanded Vertex AI lets businesses build, test, and deploy autonomous agents that execute workflows across Google Cloud and Workspace via natural language.

At **Google Cloud Next '26** in Las Vegas, Google unveiled the **Gemini Enterprise Agent Platform**, described as a "rebranded and expanded version of Vertex AI" purpose-built for enterprise agent deployment.

## The pitch

Companies can now **build, test, and deploy AI agents** that autonomously execute workflows across Google Cloud and Workspace using natural language. The platform handles three layers:

1. **Build** — visual canvas + code-first SDK; same primitives that power Deep Research Max.
2. **Test** — sandbox environments with replayable agent runs and step-level inspection.
3.
   **Deploy** — managed runtime with cost controls, audit logs, and SLAs.

## Why this matters

The agent space has fragmented across Anthropic's MCP ecosystem, OpenAI's GPTs/Operator surface, LangChain, CrewAI, and dozens of niche players. Google's bet is that **enterprise IT will choose one consolidated platform from a hyperscaler** rather than stitch together OSS frameworks.

Bloomberg reported the same week that this is Google's "latest attempt to take on OpenAI and Anthropic" in the agent market — framing the announcement as a competitive response, not industry leadership.

## Open questions

- Will the platform run agents on **non-Google models** (Claude, GPT-5.5)? Google hasn't confirmed.
- How does pricing compare to Vertex AI's existing structure?
- The early-access list is reportedly oversubscribed; the broad GA timeline is unconfirmed.

---

# OpenAI ships GPT-5.5 — first fully retrained base model since GPT-4.5

URL: https://www.thedeepfeed.ai/posts/2026-04-23-openai-gpt-5-5/
Category: Models
Date: 2026-04-23
Source: OpenAI — https://openai.com/index/introducing-gpt-5-5/
Tags: openai, gpt-5, frontier-models, coding

> Codenamed "Spud," GPT-5.5 targets agentic coding and computer use, matches GPT-5.4 latency, and lands the same day API access opens for paying customers.

OpenAI on Thursday released **GPT-5.5**, its newest frontier model and the first fully retrained base model since GPT-4.5. The model — codenamed "Spud" internally — is pitched as a "new class of intelligence for real work," with a focus on completing complex multi-step tasks with minimal human direction.

## What's new

- **Agentic coding.** GPT-5.5 sets new benchmarks on long-horizon software engineering tasks, including hand-offable refactors and multi-file changes.
- **Computer use.** Direct OS interaction has improved meaningfully — it's the first GPT model positioned as production-ready for autonomous browser and desktop workflows.
- **Latency parity with GPT-5.4** despite the architecture changes, per OpenAI's benchmark disclosures.
- **Pro tier.** GPT-5.5 Pro shipped one day later (Apr 24) for the highest-stakes use cases.

## Availability

- ChatGPT: rolling out to Plus, Pro, and Team users immediately.
- API: live as of Apr 24, with an updated system card describing additional safeguards.
- Enterprise: available via Azure and direct OpenAI contracts.

## Why it matters

This is the first release where OpenAI explicitly acknowledges trailing Anthropic in enterprise coding. TechCrunch's coverage described GPT-5.5 as OpenAI's move "one step closer to an AI super app" — a single surface that can plan, execute, and verify work end-to-end. Whether the new agentic capabilities close the gap with Claude Opus 4.7 (released Apr 16) is the open question.

---

# VAST Data hits $30B valuation as AI infra stack reshuffles

URL: https://www.thedeepfeed.ai/posts/2026-04-22-vast-data-30b/
Category: Business
Date: 2026-04-22
Source: GlobeNewswire — https://www.globenewswire.com/news-release/2026/04/22/3279162/0/en/vast-data-valued-at-30-billion-as-ai-drives-a-new-infrastructure-stack.html
Tags: vast-data, infrastructure, funding, ai-os

> The "AI Operating System" company closes a new round at $30B, citing a rare combination of growth and profitability — and a central role in powering frontier-lab infrastructure.

**VAST Data** announced a new funding round at a **$30 billion valuation** on Apr 22, citing a "rare combination of growth and profitability" driven by its central role in **powering AI infrastructure at global scale**.

## What VAST does

VAST positions itself as **the AI Operating System** — a unified storage and data platform designed for the workloads of frontier AI labs. Its customers include hyperscalers, sovereign AI initiatives, and several of the frontier labs (VAST has not disclosed which).
The company claims its platform handles:

- **Training data pipelines** at multi-exabyte scale
- **Inference-time data services** with sub-millisecond latency
- **GPU-attached storage** that keeps H100/B200/B300 fleets fed at line rate

## Why $30B is the eyebrow-raise

VAST is **profitable** — a rarity at this stage of the AI infra cycle. Most AI infrastructure companies (CoreWeave, Lambda, etc.) are still burning capital on data centers. VAST's pitch is that it's the **picks-and-shovels** play: every frontier lab needs the storage layer regardless of which model wins.

## What this tells us about the stack

The 2026 AI infrastructure stack has crystallized into roughly:

1. **Silicon** — Nvidia (still ~80%), AMD, custom (Trainium, TPU, Maia)
2. **Compute orchestration** — CoreWeave, Lambda, hyperscalers
3. **Storage / data plane** — VAST, WekaIO, Pure Storage
4. **Model platforms** — OpenAI, Anthropic, Google, Mistral, Meta
5. **Application layer** — everything else

VAST's $30 billion valuation is the storage layer asserting itself as a peer of the silicon and compute layers, not a commodity below them.

---

# Google ships Deep Research Max — agentic research with native MCP

URL: https://www.thedeepfeed.ai/posts/2026-04-21-google-deep-research-max/
Category: Agents
Date: 2026-04-21
Source: Google — https://blog.google/innovation-and-ai/models-and-research/gemini-models/next-generation-gemini-deep-research/
Tags: google, gemini, deep-research, mcp, agents

> Built on Gemini 3.1 Pro, Google's new research agents add MCP support, native data visualizations, and multi-source long-horizon workflows.

Google introduced **Deep Research** and **Deep Research Max** on Apr 21, calling them "a step change for autonomous research agents." Both are built on **Gemini 3.1 Pro**.

## What it does

- **Long-horizon research workflows** across the web or custom sources.
- **MCP (Model Context Protocol) support** out of the box — agents can plug into any MCP server for tools and data.
- **Native visualizations.** Charts and graphs are rendered inline as part of the agent's reasoning, not bolted on after the fact.
- **Custom source integration.** Point it at internal docs, databases, or proprietary corpora.

## Why this is different from "Deep Research" v1

The original Deep Research (a 2024-era feature) was essentially a polished search-and-summarize loop. The Max tier is positioned as a **true autonomous agent** — it can branch, dead-end, backtrack, and re-plan over hours of execution.

Google's positioning explicitly cites three industry-relevant axes:

1. **Quality of analysis** at the level of a junior analyst, not a Wikipedia summarizer.
2. **Source traceability** — every claim is linked to where it came from.
3. **Workflow integration** — agents can be triggered from Workspace and surface results in Docs, Sheets, and Gmail.

## The bigger pattern

The Gemini Enterprise Agent Platform — announced at Google Cloud Next '26 a week later — is the umbrella: Vertex AI rebranded and expanded into a full agent build-test-deploy surface. Combined with Deep Research Max, this is Google's most coherent agent story to date.

---

# Anthropic releases Claude Opus 4.7 — the SWE benchmark just moved

URL: https://www.thedeepfeed.ai/posts/2026-04-16-claude-opus-4-7/
Category: Models
Date: 2026-04-16
Source: Anthropic — https://www.anthropic.com/news/claude-opus-4-7
Tags: anthropic, claude, swe-bench, coding

> Opus 4.7 brings notable gains on the hardest software engineering tasks, with users reporting confident hand-off of work that previously required close supervision.

Anthropic shipped **Claude Opus 4.7** on Apr 16, calling it "a notable improvement on Opus 4.6 in advanced software engineering, with particular gains on the most difficult tasks." The headline claim from Anthropic's own announcement: users report being able to **hand off their hardest coding work** — the kind that previously needed close supervision — with confidence.
Opus 4.7 handles complex, long-running tasks with rigor and consistency, pays precise attention to instructions, and now uses methods to **verify its own output** before reporting back.

## What changed

- **Better self-verification.** Opus 4.7 explicitly checks its own work mid-task and corrects course before returning a final answer.
- **Vision improvements.** Stronger multimodal reasoning.
- **Same pricing tier** as Opus 4.6 — no premium for the upgrade.

## Industry context

Opus 4.7 lands one week ahead of OpenAI's GPT-5.5 release, which TechCrunch and TNW frame as OpenAI's response to Anthropic's lead in the enterprise coding market. The two-lab arms race is now running on a roughly weekly cadence, with Google's Gemini 3.1 Pro powering the Deep Research agents released the same week.

The 2026 picture: three frontier labs trading model releases at roughly monthly intervals, with benchmark deltas measured in single percentage points. The differentiation is moving from raw intelligence to **agentic reliability** — can you trust the model to complete a four-hour task without supervision?

---