Exponential AI

AI is improving exponentially. But will that continue? Will compute, data, and algorithms become limiting factors for AI advancement?

AI is improving exponentially, and that seems unlikely to slow any time soon

How quickly is AI advancing? Exponentially.

Will it slow down any time in the near future? Very unlikely.

As I wrote separately, despite the extraordinary capability of today’s AI models, most experts agree they lack some fairly fundamental capabilities required to qualify as AGI.1 But how long will that last? If, as many have suggested,2 AI is improving exponentially, that won’t be true for long.

Human beings aren’t great at understanding the power of exponential growth. Looking backward, it appears modest; looking forward, it looks vertical. But all we tend to see is what’s behind us. That’s why humans aren’t good at recognizing exponential growth while we’re in the middle of it, either.

From Tim Urban’s awesome 2015 post on exponential AI

Continued exponential growth of AI would have staggering implications for our economy and society. Therefore, it’s worth exploring how likely it is that exponential growth in AI capabilities will continue for the next 10+ years.

Generative AI progress to date

Generative AI dates back to the 1960s and the ELIZA chatbot, which was designed to simulate a psychotherapist. You can test it out yourself, but I’ll give you the preview: it’s not great. In fact, it’s so bad that in a recent test, users guessed it might actually be a human, on the theory that an AI wouldn’t perform so poorly.3

User input is preceded by an asterisk

Advances in the late 1980s (Recurrent Neural Networks) and late 1990s (Long Short-Term Memory) allowed applications such as speech recognition and translation,4 albeit at quality levels that would seem quite crude today. But those advances didn’t seem to translate well to generative AI. The A.L.I.C.E. chatbot from 1995 isn’t much better than ELIZA from thirty years prior. Fast forward another ten years to 2005, and Jabberwacky came out. It’s not much better than ELIZA. Arguably, it’s worse.

User input is the lighter font

Then in 2014, the concept of GANs (Generative Adversarial Networks) was described in a research paper.5 Transformer models were introduced in 2017 (the core architecture on which OpenAI’s GPT models are built).6

In June 2018, OpenAI released Generative Pre-trained Transformer 1 (GPT-1), the first of their large models. Subsequent releases of GPT-2 and GPT-3 were met with modest fanfare. It wasn’t until November 2022 with the release of ChatGPT that transformer-based chatbots truly captured the public imagination. ChatGPT represented a combination of significant computational budgets and architectural improvements. 

The qualitative difference between ChatGPT and earlier chatbots such as ELIZA and Jabberwacky is hard to overstate. For example, in early 2023, a research paper showed that ChatGPT (not v4, but the earlier 3.5 version) was able to pass the United States Medical Licensing Exam (USMLE).7

For over 50 years, the pace of change in generative AI had been modest. But then new models, combined with exponential growth in computational speeds, allowed new AI models to obliterate the prior best-in-class models. 

I wrote a piece in December 2022 about The AI Inflection Point, which described how far reaching AI’s capabilities had become:

“Machines continue to increase the scope of analytical tasks they perform better, faster, and cheaper than humans.
Now they’ve come for our creative capabilities—writing, problem solving, art, photography, design, research, and more. And they’re quickly making inroads on social skills. In many cases, they’re already as good as—or better than—most of us.”

I also described how the CICERO AI agent from Meta was able to compete at the 90th percentile against the best human players at the game of Diplomacy, a game rooted largely in interpersonal, social, and negotiation skills.

Since then, the rate of improvement has been incredibly rapid. GPT-4 was released about four months after ChatGPT and represented a substantial improvement. And it’s widely rumored that GPT-5 is already in testing internally at OpenAI and will represent an order-of-magnitude improvement over the current state-of-the-art GPT-4.

The breadth of capabilities of generative AI is also surprising, expanding to multiple modalities including images, videos, 3D scenes, sound, and more. And “generative” is becoming something of a misnomer given that these models handily process and make sense of multi-modal inputs as well. 

The quality improvements at Midjourney, an image generation AI company, over the course of a single year are representative of the pace of advancement:

Midjourney evolution over the course of a single year

More recently, Google’s AMIE outperformed human doctors on both diagnostic quality and empathy in text-based patient consultations.8 In Q1 2024, OpenAI unveiled Sora, an AI video generation technology that already produces surprisingly good video. Similar advances are being made in 3D, virtual environments, and more.

Another surprising feat was achieved by Claude 3 Opus, from Anthropic. Opus answered GPQA questions (graduate-level, “Google-proof” questions in biology, physics, and chemistry) with roughly 60% accuracy. PhD-level validators from outside the domain in question, even with access to the internet, typically achieve about 34% accuracy, while experts within the domain score roughly 65%.9 In other words, on the chemistry-oriented questions, Opus handily beats non-chemist scientists and comes close to the chemistry PhDs themselves.

AI’s ability to perform common sense reasoning is improving very quickly as well. The Visual Question Answering challenge began in 2015, at which point best-in-class AI achieved a score of 55.4% versus 80.8% for humans. By 2022, AI beat humans with a score of 84.3%.10

AI can already create extraordinary content, make informed decisions in incredibly complex scenarios, and leverage social skills to accomplish its goals. It already handily beats the average human on standardized tests ranging from the SAT to the bar exam to the Advanced Sommelier exam, and it rivals PhDs on graduate-level questions in their own fields. It’s even beginning to beat humans in some tests of common-sense reasoning.

As TIME Magazine put it, “AI has surpassed humans at a number of tasks and the rate at which humans are being surpassed at new tasks is increasing.”11 It certainly appears that the pace of recent improvement fundamentally exceeds everything that came before it; recent progress looks vertical. 

But will that continue?

Three factors driving AI advancement

As OpenAI has described it, three factors drive AI advances:

“algorithmic innovation, data (which can be either supervised data or interactive environments), and the amount of compute available for training.”12

Computational power

AI requires extraordinary computational power for training (creating a model) and inference (using the model). There are two underlying factors: 

Scalability (somewhat oversimplified) is the maximum scale you can usefully apply, limited by factors such as memory and parallelism.

Efficiency is how much quality you can squeeze out of a given scale (which also determines quality-to-cost ratios).

Efficiency and scalability are driven by a complex interplay of “scaling laws, data utilization, architectural innovations, training and tuning strategies, and inference techniques.”13
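To make “scaling laws” concrete, here is a minimal sketch using the parametric loss formula fit by Hoffmann et al. (2022, the “Chinchilla” paper). That paper isn’t cited in this article, but it’s representative of the genre; the constants below are its published fit and are shown purely for illustration.

```python
# Illustrative neural scaling law (Chinchilla-style): model loss falls as a
# power law in parameter count N and training tokens D. Constants are the
# published Hoffmann et al. (2022) fit, included only to make the idea concrete.
def scaling_law_loss(n_params: float, n_tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Doubling both model size and data yields predictable, steadily shrinking
# gains in loss -- the behavior the term "scaling laws" refers to.
for scale in (1, 2, 4, 8):
    n, d = 70e9 * scale, 1.4e12 * scale   # baseline: 70B parameters, 1.4T tokens
    print(f"{scale:>2}x scale -> predicted loss {scaling_law_loss(n, d):.3f}")
```

Fits like this are what let labs estimate in advance how much additional quality another order of magnitude of compute and data should buy.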

A 2024 research paper suggests that prior to 2010, the total compute used to train AI models doubled roughly every 21.6 months, but has doubled about every six months since.14 OpenAI estimates that the compute applied to training each new best-in-class model is increasing about tenfold per year.15 Visual Capitalist shows something similar: GPT-3 used 317 million petaFLOPs of compute for training in 2020, while Minerva used 2.7 billion petaFLOPs in 2022.16 And as these models grow in scale, the memory and processor requirements for inference also tend to grow.
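Because the paragraph above quotes growth in two different units (doubling times and annual multiples), here is a quick, purely arithmetic sketch for converting between them; no new data is assumed.

```python
import math

def annual_multiplier(doubling_months: float) -> float:
    """Growth factor per year implied by a given doubling time (in months)."""
    return 2 ** (12 / doubling_months)

def implied_doubling_months(annual_mult: float) -> float:
    """Doubling time (in months) implied by a given annual growth factor."""
    return 12 * math.log(2) / math.log(annual_mult)

print(annual_multiplier(21.6))        # pre-2010 trend: ~1.5x per year
print(annual_multiplier(6.0))         # post-2010 trend: ~4x per year
print(implied_doubling_months(10.0))  # OpenAI's ~10x/year estimate: doubling every ~3.6 months
```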

That could be a problem, given that Moore’s Law appears to be running up against physical limits: 3-nanometer transistor features aren’t that much bigger than 0.2-nanometer silicon atoms.17 Fortunately, three-dimensional dies, more cores, specialized architectures, and perhaps quantum computing are likely to allow processing power to keep growing as it has for decades.

We’re also seeing significant efficiency improvements beyond the chips themselves—data utilization, system architectures, algorithms, and more. ARK Invest predicts that for the foreseeable future “the convergence of hardware and software could drive AI training costs down by 75% [annually]… and inference costs seem to be falling at an annual rate of ~86%, even faster than training costs.”18

For example, while I was drafting this article, Nvidia announced its Blackwell architecture, which reportedly delivers up to 30x faster AI inference with roughly 25x better energy efficiency than the prior generation (although those figures still need real-world validation).19 Further evidence comes from TSMC, which projects a more than 1,000x improvement in GPU energy efficiency within the next fifteen years.20 There’s clearly an arms race for AI compute, and it shows no sign of slowing any time soon.

Efficiency improvements seem likely to continue for the foreseeable future, driven by compute-efficient models such as those from Cerebras and by more fundamental architectural shifts toward top-down model development rather than the brute-force, bottom-up approach currently in use.

Overall, it seems likely that computational power and efficiency will continue to contribute to exponential growth of AI progress.

Algorithms

One sign of progress in AI algorithms is the onslaught (it’s hard to describe it otherwise) of research papers on AI. The worldwide volume of AI scholarly papers published grew from about 250,000 in 2016 to almost 500,000 in 2021 (Our World in Data), which works out to over 1,300 papers per day. And the growth in AI patents is much more rapid: 79.6% compound annual growth, according to Stanford HAI’s 2022 report.21

But papers and patents don’t prove algorithmic improvement. More useful is to ask how much algorithms are improving beyond the underlying computational power available. In 2020, the DiggingDeepAmidstChaos blog pointed out that AI beat the best human chess player in 1997, and about twenty years later, in 2016, AlphaGo beat the best Go player in the world. Go is about a googol (1 followed by 100 zeroes) times more complex than chess, yet improvements in compute power since 1997 account for only about 12 of those zeroes (a trillion-fold improvement). So the capability of the models and systems themselves has likely improved by a factor of roughly 1 followed by 88 zeroes over that period.22 That’s mind-boggling. It’s like comparing a grain of sand to the entire breadth of the known universe.
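The accounting behind that estimate is simple once everything is expressed in powers of ten; here is the back-of-the-envelope version, using the blog’s own figures.

```python
# Orders-of-magnitude accounting using the figures quoted above. Since the
# quantities are powers of ten, the arithmetic reduces to subtracting exponents.
go_vs_chess_complexity = 100   # Go is ~10^100 times more complex than chess
compute_growth_1997_2016 = 12  # compute improved ~10^12 (a trillion-fold) over that span

non_hardware_contribution = go_vs_chess_complexity - compute_growth_1997_2016
print(f"Implied improvement from models and systems: ~10^{non_hardware_contribution}")  # ~10^88
```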

More broadly, it’s estimated that algorithmic progress alone is doubling AI capabilities every 9 months.23 Algorithms for generative language models are improving somewhat faster, doubling roughly every 8 months.24 It seems clear to me that AI algorithms have been improving at an exponential rate and show no signs of slowing down.

One counterargument can be found in Stanford HAI’s annual reports, which have reported a leveling off of AI performance in certain tests.25 But, as they point out, that’s because the AI appears to be reaching the limits of some tests—to the point where it’s outperforming the average human. As a result, new tests are being devised, with an obvious broadening of the modalities and capabilities being tested. 

Overall, it’s hard to argue that algorithms or efficiency will be a limiting factor for AI progress any time soon.

Training data

That leaves high-quality training data. Will generative AI advances slow due to insufficient supply of training data? ARK Invest argues this is unlikely to happen any time soon, pointing out the potential utility of 30 quadrillion words spoken annually and data generated about the physical world by autonomous vehicles and robots.26 

Beyond spoken words and sensor data from the physical world, another potential solution is synthetic data: artificially created data designed to mimic the real thing.27 In theory, synthetic data is limitless, although there are concerns about whether it would result in lower-quality training. It’s interesting, however, that a number of research papers, including research from MIT28 in 2022, suggest that in at least some cases synthetic data works better than real data.

And perhaps more importantly, the scale of training data required may decline over time as algorithms improve. Future systems will likely use a top-down approach—versus the current bottom-up approach—and thus won’t require as much information.29 Eliezer Yudkowsky is a controversial figure in AI, but he makes a strong argument that “AGI will not be upper-bounded by human ability or human learning speed. Things much smarter than [a] human would be able to learn from less evidence than humans require.”30 

DatologyAI presents another approach that might reduce the amount of data required. By curating training data (selecting, pruning, and ordering examples so the model learns more from each one), they estimate models can achieve similar outcomes with roughly half the data.31
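As a toy sketch of the underlying idea (score-based data pruning in the spirit of the paper cited in note 31, not DatologyAI’s actual method), here is what keeping only a “more informative” half of a dataset might look like. The scoring rule, an example’s distance from the dataset centroid, is purely illustrative.

```python
import numpy as np

def prune_by_difficulty(embeddings: np.ndarray, keep_fraction: float = 0.5) -> np.ndarray:
    """Toy score-based data pruning: rank examples by distance from the dataset
    centroid (a crude 'difficulty' proxy) and keep only the hardest keep_fraction.
    Real systems use far better scoring functions; this only illustrates the
    mechanism of training on less, better-chosen data."""
    centroid = embeddings.mean(axis=0)
    difficulty = np.linalg.norm(embeddings - centroid, axis=1)
    n_keep = int(len(embeddings) * keep_fraction)
    return np.argsort(difficulty)[-n_keep:]   # indices of the farthest-from-centroid examples

# Example: prune a synthetic dataset of 10,000 embedding vectors to half its size.
rng = np.random.default_rng(0)
data = rng.normal(size=(10_000, 128))
kept = prune_by_difficulty(data, keep_fraction=0.5)
print(f"Kept {len(kept)} of {len(data)} examples for training")
```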

Research from Epoch does suggest that current AI approaches are likely to run out of quality language data by 2026, but even the authors of the report said there was realistically about a 20% chance of that occurring.32 The general consensus seems to be that there is unlikely to be a data problem any time soon.33

Of the three legs of the AI progress stool, access to data does seem to be the weakest link. But my prediction is that a combination of further data sources, synthetic data, and less data-intensive training models will mean that high-quality data won’t end up being a limitation for continued AI progress. 

In other words, it seems clear to me that AI will continue to improve at an exponential pace for the foreseeable future.

What should we expect going forward?

As I said, it's difficult for humans to understand exponential growth. Perhaps some quick math will help (a sketch follows below). If compute continues to double every six months and algorithm quality doubles every nine months, the two effects multiply, and we might expect roughly:

  • ~100,000x improvement in AI within 5 years
  • ~10,000,000,000x improvement in AI within 10 years
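Here is the sketch behind those numbers. The key (and debatable) assumption is that compute growth and algorithmic progress are independent and simply multiply together.

```python
# A minimal sketch of the compounding arithmetic above: if compute doubles every
# 6 months and algorithmic efficiency doubles every 9 months, the two factors
# multiply. (This assumes the gains are independent, which is a simplification --
# some algorithmic progress only pays off when more compute is available.)
def combined_improvement(years: float, compute_doubling_months: float = 6,
                         algo_doubling_months: float = 9) -> float:
    months = years * 12
    return 2 ** (months / compute_doubling_months) * 2 ** (months / algo_doubling_months)

for years in (5, 10):
    print(f"{years} years -> roughly {combined_improvement(years):,.0f}x")
# 5 years  -> ~104,000x
# 10 years -> ~10.8 billion x
```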

Now do you get why I'm ringing the bell?

The biggest question in my mind is when we can anticipate AGI (Artificial General Intelligence) or something similar. Despite some arguments to the contrary,34 the general consensus is that GPT-4 does not represent AGI, which would require better abstract thinking, common sense, cause and effect reasoning, and a deep understanding of the world, among other capabilities.35 

AGI would mean that AI could learn to accomplish any intellectual task that human beings or animals can perform, and surpass human capabilities in the majority of economically valuable tasks. Some theorize that AGI is impossible, but they’re increasingly in the minority. 

In late 2019, the Metaculus prediction market timeline for the release of an AGI system was 50 years. By late 2022, it had dropped to 18 years. As of early March 2024, it has dropped to 7 years (AGI by 2031). 

ARK Invest’s 2024 Big Ideas report pointed out that, based on the forecast errors in that prediction market to date, the real date for AGI might be closer to 2026. Those forecast errors track with other predictions over time: McKinsey’s estimates of when AI will affect jobs, and how much, have shortened and grown with every new report.

Daniel Kokotajlo, an OpenAI researcher, apparently believes we’ll have AGI any year now, including a 30% chance of it arriving this year (2024). 

Metaculus prediction markets think that once the first weak AGI is released, it will take a mere 30 months to achieve ASI. ASI in this case is defined as “an AI which can perform any task humans can perform in 2021, as well or superior to the best humans in their domain.”

A recent survey36 of over 2,700 AI researchers who had published in top-tier AI venues predicted even odds of HLMI (High-Level Machine Intelligence) arriving by 2047 (in 23 years). HLMI means a machine can do any task, physical or otherwise, as well or better than a human. It’s also worth noting an apparent forecast error in the survey: the estimate was down thirteen years from 2060 in the 2022 survey. For comparison, in the six years between the 2016 and 2022 surveys, the expected date moved only one year earlier, from 2061 to 2060. 

I think the synthesis of all of this is that AGI is coming, and likely sooner than most think. We’re seeing hints that the next release of OpenAI’s chatbot might get us much closer. Perhaps that’s one of the reasons for the brief ouster of Sam Altman from the company, by a non-profit board perhaps troubled by reports of rapid progress. It’s interesting that rumors about a project called Q* (reportedly related to Q-learning) were making their way around the AI community at that time (and since).

Integrating Q-learning-style methods into LLMs could equip them with more robust reasoning and planning capabilities. By learning through interaction and feedback, models may improve their ability to handle complex, multi-step inference problems. If effectively applied to an AI system, this could supply some of the capabilities needed to bring AI closer to AGI.
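For readers unfamiliar with the term, here is textbook tabular Q-learning on a toy gridworld. It illustrates only what “Q-learning” means (learning action values from trial-and-error feedback); how, or whether, OpenAI combines anything like it with LLMs is not public, so the example below describes no real system.

```python
import numpy as np

# Textbook tabular Q-learning: the agent refines an estimate Q(state, action)
# of long-term reward purely from trial-and-error feedback. The 4x4 gridworld
# below is a standalone toy, not a description of any production AI system.
n_states, n_actions = 16, 4          # 4x4 grid, 4 moves
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = np.zeros((n_states, n_actions))

def step(state: int, action: int):
    """Toy environment: move on a 4x4 grid; reward 1 for reaching state 15."""
    row, col = divmod(state, 4)
    if action == 0: row = max(row - 1, 0)   # up
    if action == 1: row = min(row + 1, 3)   # down
    if action == 2: col = max(col - 1, 0)   # left
    if action == 3: col = min(col + 1, 3)   # right
    next_state = row * 4 + col
    reward = 1.0 if next_state == 15 else 0.0
    return next_state, reward, next_state == 15

rng = np.random.default_rng(0)
for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy exploration: mostly exploit, occasionally try a random move.
        action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
        next_state, reward, done = step(state, action)
        # The Q-learning update: nudge Q toward reward plus discounted best future value.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(Q.argmax(axis=1).reshape(4, 4))   # learned greedy action for each grid cell
```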

Conclusion

AI is progressing at an exponential pace, and that pace seems unlikely to slow any time soon. AGI could be 20+ years out, but it could also arrive any year now, and its arrival would have extraordinary consequences for our economy and society.


  1. HAI, “Artificial Intelligence Index Report 2023”, Stanford University
  2. Including: Geoffrey Hinton, Yoshua Bengio, Alan Turing, Elon Musk, Bill Gates, Stephen Hawking, Stuart Russell, and Vint Cerf
  3. Evanson, Nick. “A Chatbot from the 1960s Has Thoroughly Beaten OpenAI’s GPT-3.5 in a Turing Test, Because People Thought It Was Just ‘Too Bad’ to Be an Actual AI”. PC Gamer, 6 Dec. 2023
  4. “The Rise of Generative AI: A Timeline of Breakthrough Innovations”. Wireless Technology & Innovation
  5. Goodfellow, Ian J., et al. Generative Adversarial Networks. 10 June 2014
  6. Vaswani, Ashish, et al. Attention Is All You Need. 12 June 2017
  7. Kung, Tiffany H., et al. “Performance of ChatGPT on USMLE: Potential for AI-Assisted Medical Education Using Large Language Models”. PLOS Digital Health, edited by Alon Dagan, vol. 2, no. 2, Feb. 2023, p. e0000198
  8. Tu, Tao, et al. “Towards Conversational Diagnostic AI”. Google Research, Jan. 2024
  9. Rein, David, et al. GPQA: A Graduate-Level Google-Proof Q&A Benchmark. 20 Nov. 2023
  10. HAI, “Artificial Intelligence Index Report 2022”, Stanford University
  11. Henshall, Will. “Why AI Progress Is Unlikely to Slow Down”. Time, 6 Nov. 2023
  12. “AI and Compute”. OpenAI, https://openai.com/research/ai-and-compute
  13. Ding, Tianyu, et al. “The Efficiency Spectrum of Large Language Models: An Algorithmic Survey”
  14. Sastry, Girish, et al. “Computing Power and the Governance of Artificial Intelligence”. 14 Feb. 2024
  15. “AI and Compute”. OpenAI, https://openai.com/research/ai-and-compute
  16. Rao, Pallavi. “Charted: The Exponential Growth in AI Computation”. Visual Capitalist, 18 Sept. 2023
  17. Hajj, Ahmad El. “What Reaching the Size Limit of the Transistor Means for the Future”. Inside Telecom, 23 June 2022
  18. ARK Invest Big Ideas report 2024
  19. Edwards, Benj. “Nvidia Unveils Blackwell B200, the ‘World’s Most Powerful Chip’ Designed for AI”. Ars Technica, 19 Mar. 2024
  20. Liu, Mark, and H.-S. Philip Wong. “How We’ll Reach a 1 Trillion Transistor GPU”. IEEE Spectrum, IEEE Spectrum, 5 Apr. 2024
  21. HAI, “Artificial Intelligence Index Report 2022”, Stanford University
  22. DiggingDeepAmidstChaos. “Is Current Progress in Artificial Intelligence Exponential?”. Medium, Medium, 4 May 2020
  23. Erdil, Ege, and Tamay Besiroglu. “Revisiting Algorithmic Progress”. Epoch, 12 Dec. 2022
  24. Ho, Anson, et al. “Algorithmic Progress in Language Models”. Epoch, 12 Mar. 2024
  25. HAI, “Artificial Intelligence Index Report 2023”, Stanford University
  26. ARK Invest Big Ideas report 2024
  27. Also mentioned in the ARK Invest Big Ideas report
  28. Zewe, Adam. “In Machine Learning, Synthetic Data Can Offer Real Performance Improvements”. MIT News, 2022
  29. Wilson, Daugherty, and Davenport. “The Future of AI Will Be About Less Data, Not More”. HBR. 14 Jan. 2019
  30. Yudkowsky, Eliezer. AGI Ruin: A List of Lethalities.
  31. Sorscher, Ben, et al. “Beyond Neural Scaling Laws: Beating Power Law Scaling via Data Pruning”. arXiv, 2022
  32. Villalobos, Pablo, et al. “Will We Run Out of ML Data? Evidence From Projecting Dataset Size Trends”. Epoch, 10 Nov. 2022
  33. Henshall, Will. “Why AI Progress Is Unlikely to Slow Down”. Time, 6 Nov. 2023
  34. Bubeck, Sébastien, et al. Sparks of Artificial General Intelligence: Early Experiments With GPT-4. 22 Mar. 2023
  35. Sayeed, Md. Abu Mas-Ud. “Is GPT-4 Already Showing Signs of Artificial General Intelligence?”. LinkedIn, 24 May 2023
  36. Grace, Katja, et al. “Thousands of AI Authors on the Future of AI”. Preprint, Jan. 2024