Monday, October 6, 2025

On Our "Virtual Route 66" Around the Tech Virtual Highway: Thoughts on Artificial Intelligence

Our team created a grid to inspire and engage in this era of Artificial Intelligence, a reality that is here and now.
We curated a selection of thoughts in line with our commitment to feature insights and ideas on Artificial Intelligence, including coverage of the Google AI Conference, a snapshot of the latest developments at Electronic Arts, and an update on OpenAI's plans for a potential competitor to Amazon, closing out with a call to create red lines for AI:

You might think the $55 billion buyout of Electronic Arts would be today's most significant business event. But that's yesterday's news—word of the deal broke on Friday. (For more on that, see below.) Today's biggest news, in reality, is OpenAI's launch of its shopping feature, which puts the creator of ChatGPT in competition with everyone from Amazon to Instagram. 

OpenAI unveiled what it calls an Instant Checkout feature, which it said was its first step "toward agentic commerce in ChatGPT." For the moment, there doesn't seem to be much that is "agentic" about this: People can search ChatGPT for information about a product and then buy it. Despite the announcement's hyperbole, the quick checkout feature is an essential step for OpenAI. Given that the company is allocating a significant portion of the global banking system's funds to data centers, this initiative promises to generate revenue, as merchants will pay a small fee on purchases. In particular, this service will help OpenAI find a way to generate revenue from the hundreds of millions of people who use ChatGPT without paying. 

How much should Amazon CEO Andy Jassy worry? Not too much—yet. What OpenAI is describing seems to be closer to Instagram than to Amazon: Once a consumer has clicked the buy button, the merchant handles everything else, from payments to shipping and returns. (Instagram doesn't offer a checkout button in its app, however.) Hilariously, ChatGPT claims to act as the "user's AI agent—securely passing information between user and merchant, just like a digital personal shopper would." ChatGPT appears to be applying the term "AI agent" to the type of transaction that has been occurring online for years. 

One thing Amazon has going for it is its seamless shipping experience. Amazon's customers know that when they order something on the site, it will arrive quickly and without problems. That cannot be said for every merchant shipping independently. Moreover, OpenAI isn't the first company to try offering checkout: Instagram and Pinterest both attempted it, unsuccessfully.

The good news for Amazon is that it can use OpenAI's initiative as evidence to argue that the Federal Trade Commission's antitrust lawsuit against Amazon, scheduled to go to trial in early 2027, is outdated due to the introduction of new artificial intelligence services (henceforth referred to as the Google defense). 

EA's Exit

Shareholders in Electronic Arts can't complain about the deal announced on Monday. The $210-per-share offer, valuing EA at $55 billion, is the richest price the company has commanded since it went public 35 years ago. For most of the past seven years, EA stock traded no higher than about $140, appreciating beyond that level only this year. (For more on the deal, watch The Information's Cory Weinberg on TITV today.)

In terms of multiples, that price works out to 49.4 times EA's fiscal 2025 earnings per share. For a company that isn't growing, that's not a bad number. Incidentally, it's the same price-earnings multiple that was implicit in Microsoft's purchase of EA rival Activision Blizzard in late 2023. Coincidence?
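As a quick sanity check on that multiple, the implied earnings per share can be backed out from the deal terms above (the EPS figure is inferred, not stated in the article):

```python
# Back out the EPS implied by the reported deal price and P/E multiple.
offer_per_share = 210.0   # EA buyout offer, dollars per share
pe_multiple = 49.4        # reported multiple of fiscal 2025 EPS

implied_eps = offer_per_share / pe_multiple
print(round(implied_eps, 2))  # ≈ 4.25
```

In other words, the $210 offer prices EA at roughly $4.25 of fiscal 2025 earnings per share.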

In Other News

• Travel management platform TravelPerk has hired investment bankers for an upcoming U.S. IPO, The Information reported.

• Anthropic on Monday released its latest model, Claude Sonnet 4.5, according to a blog post. The model outperforms OpenAI's flagship GPT-5 model on a popular software engineering benchmark and can work for more than 30 hours straight on complex, multistep tasks, according to the blog post.

• YouTube agreed to pay $24.5 million to settle a 2021 lawsuit from President Donald Trump over the suspension of his account after the Jan. 6 riots, according to a court filing on Monday.

Today in The Information's TITV

Check out today's episode of TITV in which a Snowflake product management executive tells us about its plan to end the AI corporate data wars.

Artificial intelligence is hungry for power at a scale that defies belief.

Last week, OpenAI and Nvidia said they would work together to develop 10 gigawatts of data center capacity over an unspecified period. Inside OpenAI, Sam Altman floated an even more staggering number: 250 GW of compute in total by 2033, roughly one-third of the peak power consumption in the entire U.S.!

Let that sink in for a minute. A large data center used to mean 10 to 50 megawatts of power. Now, developers are pitching single campuses in the multigigawatt range—on par with the energy draw of entire cities—all to power clusters of AI chips.

Or think of it this way: A typical nuclear power plant generates around 1 GW of power. Altman’s target would mean the equivalent of 250 plants just to support his own company’s AI. And based on today’s cost to build a 1 GW facility (around $50 billion), 250 of them implies a cost of $12.5 trillion.

“We are in a compute competition against better-resourced companies,” Altman wrote to his team last week, likely referring to Google and Meta Platforms, which also have discussed or planned large, multigigawatt expansions. (xAI CEO Elon Musk also knows a thing or two about raising incredible amounts of capital.)

“We must maintain our lead,” Altman said.

OpenAI expects to exit 2025 with about 2.4 GW of computing capacity powered by Nvidia chips, said a person with knowledge of the plan, up from 230 MW at the start of 2024.

Ambition is one thing. Reality is another, and it’s hard to see how the ChatGPT maker would leap from today’s level to hundreds of gigawatts within the next eight years. Obviously, that figure is aspirational. 

Then again, OpenAI’s fast-rising server needs surprised even Nvidia executives, said people on both sides of the relationship.

Before the events of last week, OpenAI had contracted to have around 8 GW by 2028, almost entirely consisting of servers with Nvidia graphics processing units. That’s already a staggering jump, and OpenAI is planning to pay hundreds of billions of dollars in cash to the cloud providers who develop the sites. 

To put it into perspective, Microsoft’s entire Azure cloud business operated at about 5 GW at the end of 2023—and that was to serve all of its customers, not just AI. (Azure is No. 2 after Amazon’s cloud business.)

Bigger Is Still Better

Data center developers tell me most of OpenAI’s top competitors are asking for single campuses in the 8 to 10 GW range, an order of magnitude bigger than anything the industry has ever attempted to build.

A year and a half ago, OpenAI’s plan with Microsoft to build a single Stargate supercomputer costing $100 billion seemed like science fiction. Barring a seismic macroeconomic change, these types of projects now seem like a real possibility.

The rationale behind them is simple: Altman and his rivals believe that the bigger the GPU cluster, the stronger the AI model they can produce. Our team has been at the forefront of reporting on some of the limitations of this scaling law, as evidenced by the smaller step-up in quality between GPT-5 and GPT-4 than between GPT-4 and GPT-3. 

Nevertheless, Nvidia’s fast pace of GPU improvements has strengthened the belief of Altman and his ilk that training runs conducted with Blackwell chip clusters this year and with Rubin chips next year will crack open significant gains, according to people who work for these leaders.

In the early days of the AI boom, it was hard to develop clusters of a few thousand GPUs. Now firms are stringing together 250,000, and they want to connect millions in the future. 

That desire runs into a pretty important constraint: electricity. Companies are already trying to overcome that hurdle in unconventional ways, by building their own power plants instead of waiting for utilities to provide grid power, or by putting facilities in remote areas where energy is easier to secure. 

Still, the gap between company announcements and the reality on the ground is enormous. Utilities by nature are conservative when it comes to adding new power generation. They won’t race to build new plants if there’s a risk of ending up with too much capacity—no matter who is asking.

‘Activating the Full Industrial Base’

OpenAI’s largest cluster under development, in Abilene, Texas, currently uses grid power and natural gas turbines. But other projects it has announced in Texas will use a combination of natural gas, wind and solar. 

Milam County, where OpenAI is planning one of its next facilities, recently approved a 5 GW solar cell plant, for instance. And gas is expected to be the biggest source of power for the planned sites, a person with knowledge of the plans said.

To accomplish its goals, OpenAI and its partners will need the makers of gas and wind turbines to greatly expand their supply chains. That’s not an easy task, given that it involves some risk-taking on the part of the suppliers. Perhaps Nvidia’s commitment to funding OpenAI’s data centers while maintaining control of the GPUs will make those conversations easier.

Altman told his team that obtaining boatloads of servers “means activating the full industrial base of the world—energy, manufacturing, logistics, labor, supply chain—everything upstream that will make large-scale compute possible.”

There are other bottlenecks, such as getting enough chipmaking machines from ASML and getting enough manufacturing capacity from Taiwan Semiconductor Manufacturing Co., which produces Nvidia’s GPUs. Negotiating for that new capacity will fall to Nvidia.

Predicting the future is notoriously difficult, but a lot of things will need to go right for OpenAI and its peers to get all the servers they want. In the meantime, they will keep making a lot of headlines in their quest to turn the endeavor into a self-fulfilling prophecy.

New From Our Reporters

Exclusive

Data Startup Fivetran In Talks to Buy dbt Labs in Multibillion-Dollar Deal

By Valida Pau, Katie Roof, and Kevin McLaughlin




Artificial Intelligence

How Jensen Huang Is Using Nvidia Cash to Rule the AI Economy

By Anissa Gardizy, Nick Wingfield, Wayne Ma, and Qianer Liu

 

200 world leaders demand AI “red lines”

🚨 Our Report 

Over 200 former heads of state, Nobel laureates, AI leaders, researchers, and scientists, along with more than 70 AI organizations, have signed “The Global Call for AI Red Lines,” an initiative urgently calling for a global agreement, by the end of 2026, on “clear and verifiable red lines” that AI should never cross. The signatories warn that AI’s “current trajectory presents unprecedented dangers.”

🔓 Key Points

  • Signatories include “godfathers of AI” and Turing Award winners Geoffrey Hinton and Yoshua Bengio, OpenAI co-founder Wojciech Zaremba, Anthropic CISO Jason Clinton, and Google research scientist Ian Goodfellow.

  • The initiative’s goal is “to prevent large-scale, irreversible risks before they happen,” stating that if nations can’t agree on what they want to do with AI, “they must at least agree on what AI must never do.”

  • Although it doesn’t specify exactly what these red lines should be, it offers suggestions such as prohibiting AI impersonation of humans, autonomous replication of AI systems, and the use of AI in nuclear warfare.

🔐 Relevance 

Although many AI companies have called for unified safety measures and agreements, research shows that many are prioritizing profit and progress over safety and societal welfare. This is the fundamental reason behind this initiative, as responsible safety policies within companies “fall short of real enforcement” and an independent global institution “with teeth” is needed to define, monitor, and enforce the red lines. Stuart Russell, a professor of computer science at UC Berkeley, believes AI companies “can comply by not building AGI until they know how to make it safe…just like nuclear power developers did not build nuclear plants until they had some idea how to stop them from exploding.”
