In this episode of AI + a16z, General Partner Anjney Midha explains the forces that lead to GPU shortages and price spikes, and how the firm mitigates these concerns for portfolio companies by supplying them with the GPUs they need through a program called Oxygen. The TL;DR version of the problem: competition for GPU access favors large incumbents who can afford to outbid startups and commit to long contracts, and when startups do buy or rent in bulk, they can end up stuck with a lot of GPUs and, absent training runs or ample customer demand for inference workloads, nothing to do with them.
Here is an excerpt in which Anjney explains how training versus inference workloads affect the level of resources a company needs at any given time:
"It comes down to whether the customer that's using them . . . has a use that can really optimize the efficiency of those chips. As an example, if you happen to be an image model company or a video model company and you put a long-term contract on H100s this year, and you trained and put out a really good model and a product that a lot of people want to use, even though you're not training on the best and latest cluster next year, that's OK. Because you can essentially swap out your training workloads for your inference workloads on those H100s.
"The H100s are actually incredibly powerful chips that you can run really good inference workloads on. So as long as you have customers who want to run inference of your model on your infrastructure, then you can just redirect that capacity to them and then buy new [Nvidia] Blackwells for your training runs.
"Who it becomes really tricky for is people who bought a bunch, don't have demand from their customers for inference, and therefore are stuck doing training runs on that last-generation hardware. That's a tough place to be."
Learn more:
Navigating the High Cost of GPU Compute
Chasing Silicon: The Race for GPUs
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.