It’s no secret artificial intelligence is killing the planet. ChatGPT reportedly consumes half a million kilowatt-hours of electricity daily. That’s 17,000 times the daily usage of an American household.
Just as the mood around AI begins to sour, Nvidia announced its latest GPU, Blackwell, which it claims cranks out 20 petaflops at 25% of the power of its previous processor. For comparison, the previous processor topped out at 4 petaflops. A petaflop is one quadrillion (10^15) floating-point operations per second. That’s a lot of flops while still waiting for Microsoft Word to load.
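Taken at face value, those two figures imply more than a 5x jump. A quick back-of-the-envelope sketch, using only the numbers above (the performance-per-watt framing is my own reading of the claim, not Nvidia's):

```python
# Figures as claimed in the announcement; "previous processor" refers to
# Nvidia's prior flagship generation.
blackwell_pflops = 20.0   # claimed peak throughput, in petaflops
previous_pflops = 4.0     # previous generation's peak, in petaflops
relative_power = 0.25     # Blackwell's claimed power draw vs. the previous chip

speedup = blackwell_pflops / previous_pflops      # raw throughput gain
perf_per_watt = speedup / relative_power          # efficiency gain, if both claims hold

print(f"Raw speedup: {speedup:.0f}x")             # 5x
print(f"Performance per watt: {perf_per_watt:.0f}x")  # 20x
```

In other words, if both claims hold simultaneously, the implied efficiency gain is 20x, which is the number that matters for the energy story above.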
Nvidia is releasing a server architecture around Blackwell called the GB200 NVLink 2, which Google, Amazon, and Microsoft will offer through their cloud services platforms. Amazon has announced plans to build a server cluster using 20,000 Blackwell processors. Nvidia hasn’t released pricing for the new chip. Its previous Hopper lineup cost $20,000 per chip, with a full server running about $100,000. This makes it tough to get in time for Christmas.
More Flops for More Parameters
Nvidia is claiming servers built around the GB200 can support training AI models with 27 trillion parameters. That’s a huge leap over the 1.7 trillion parameters GPT-4 was reportedly trained with.
There’s no mention of responsibly training these models on the GB200. I guess Nvidia is saving that message for another day.