
First, the obvious: the company does not seek government guarantees for datacenters. Governments should not pick winners or losers, and taxpayers should not be expected to bail out firms that make poor business decisions.

If one firm fails, others will continue doing valuable work.

What could make sense is governments building and owning their own AI infrastructure, with any upside flowing to the government.

Governments might offtake large amounts of compute and decide how to use it, and it could be reasonable to offer lower-cost capital for that.

Creating a strategic national reserve of computing power is sensible, but it should benefit the public sector, not private companies.

One specific area where loan guarantees have been discussed is support for building semiconductor fabs in the US, where the company and others have responded to government requests (though no formal application was submitted).

The goal there is to make chip sourcing as American as possible to bring jobs and industrial capacity back, and to strengthen the US strategic position with an independent supply chain for the benefit of all American firms.

That is different from guaranteeing private datacenter buildouts.

There are at least three questions underlying these concerns.

First: how will the company pay for all this infrastructure? Current expectations are to finish the year above a $20 billion annualized revenue run rate and to grow to hundreds of billions in annual revenue by 2030.

Commitments of roughly $1.4 trillion over the next eight years are being considered, which will require continued revenue growth and substantial work each step of the way.
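
As a rough illustration of the arithmetic, the sketch below computes the growth rate implied by moving from a roughly $20 billion annualized run rate to an assumed $200 billion in annual revenue by 2030, and the average yearly spend implied by $1.4 trillion of commitments over eight years. The $200 billion figure is only a stand-in for "hundreds of billions" and is an assumption, not company guidance.

```python
# Back-of-the-envelope sketch; all targets are illustrative assumptions.

def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate needed to go from `start` to `end` in `years` years."""
    return (end / start) ** (1 / years) - 1

run_rate_2025 = 20e9    # assumed ~$20B annualized revenue run rate at end of 2025
target_2030 = 200e9     # assumed placeholder for "hundreds of billions" by 2030
years = 5               # 2025 -> 2030

rate = implied_cagr(run_rate_2025, target_2030, years)
print(f"Implied revenue growth: {rate:.0%} per year")  # roughly 58% per year

# ~$1.4T of commitments spread evenly over eight years
print(f"Average annual commitment: ${1.4e12 / 8 / 1e9:.0f}B")  # about $175B per year
```

Spread evenly, the commitments imply infrastructure spending on the order of $175 billion per year, which is why continued revenue growth is described as necessary each step of the way.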

There is confidence in the company's prospects: new enterprise offerings, potential consumer devices, robotics, and harder-to-quantify areas like AI for scientific discovery all factor in.

The company is also exploring selling compute capacity to other organizations and individuals; global demand for an “AI cloud” is anticipated. Additional equity or debt could be raised.

All visible signals point to a need for far more computing power than currently planned for.

Second: is the company trying to become too big to fail, and should governments pick winners? The answer is no. If the company gets things wrong and cannot recover, it should fail, and others will continue serving customers.

That’s how the market and ecosystem operate. The intention is to be highly successful, but if the bet fails, responsibility lies with the company.

The chief financial officer discussed government financing and later clarified that the remarks could have been phrased more clearly.

Separately, when asked about the federal government becoming an insurer of last resort for AI risks, the response was that the government may ultimately act as a last-resort responder to catastrophic misuse. That is different from underwriting routine policies, as with nuclear power, and it is not about bailing out companies or overbuilding infrastructure.

The concern there is intentional misuse of AI causing widespread harm—such as a coordinated cyberattack on critical infrastructure—that only the government could address. The government should not be writing insurance policies for AI firms.

Third: why spend so much now instead of growing more slowly? The company is building infrastructure for an AI-powered future, and massive projects take a long time—so planning needs to start immediately.

Current trends in AI use suggest that running short of compute is both more likely and more consequential than building too much. Today, rate limits and withheld features exist because of severe compute constraints.

If AI enables major scientific breakthroughs that demand huge compute, readiness will be essential, and that future no longer seems distant.

The mission requires acting now to apply AI to hard problems—such as contributing to cures for deadly diseases—and to bring the benefits of advanced AI to people as soon as possible.

The aim is a world with abundant, low-cost AI, meeting massive demand and improving lives.

It is a privilege to attempt building infrastructure at this scale for something so important.

Given the available perspective, the company feels confident in this bet, but it could be wrong; if so, the market, not the government, will address it.