As artificial intelligence (AI) rapidly works its complex magic on one sector of the economy after another, there is an increasingly pressing need for compute resources to power all this machine intelligence.
Training a model like ChatGPT costs more than $5 million, and running the early ChatGPT demo, even before usage grew to its current level, cost OpenAI around $100,000 per day. And AI is more than just text generation; applying AI to practical problems across multiple industries requires similarly large neural models trained on a diversity of data types — medical, financial, customer, geospatial and so forth. Moving beyond the limitations of current neural net AI toward systems with higher levels of artificial general intelligence will almost surely be even more compute intensive.