
Growth of AI

When we think of growth in business or technology, we probably think of 5%, 10%, 20%, sometimes 100%. When it comes to AI, think 10,000X. What is going on here? In this blog post, I will explore some of the limitations which may make such explosive growth challenging. This post is not about the uses of AI, which are growing rapidly, but rather about the infrastructure which makes that growth possible. Jason Dorrier at Singularity Hub wrote about this in “AI Models Scaled Up 10,000x Are Possible by 2030, Report Says”.

Recent advancements in AI can largely be attributed to a simple factor: scale. The word scale has taken on new meaning in recent years. It used to refer to a meat scale or a scale of numbers. In today’s tech world, scale refers to a company’s rapid growth, which requires a significant scaling up of its operations: increasing production capacity, hiring more staff, and expanding into new markets. With regard to AI, by increasing the size of their algorithms and feeding them more data, AI companies have achieved remarkable improvements in performance and capabilities. This trend has been evident since the early 2010s, with models growing exponentially larger each year. In the last few years, the growth has accelerated even more. It is scaling up.

One AI term many people may not fully understand is the training of models. I’ll take a crack at making this more understandable. In the world of AI, a model is a digital representation of a real-world system or process which can be used to make predictions or decisions. Real-world examples are ChatGPT and Google’s Gemini. Training a model is like teaching a child. You provide the model with vast amounts of data (examples), and it learns patterns and relationships within the data. This process helps the model develop the ability to make predictions or decisions on new, unseen data. For example, if you train a model on thousands of images of cats and dogs, it will learn to distinguish between the two and can then classify new images as either a cat or a dog, often more accurately than humans can. Training with data from the Internet makes generative AI possible, with the potential to revolutionize many industries, from art and design to healthcare and education.
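To make the idea concrete, here is a tiny sketch in Python using the scikit-learn library. The “images” are just synthetic numbers standing in for features extracted from labeled cat and dog photos, but the pattern is the same as in real training: show the model many labeled examples, then ask it to classify examples it has never seen.

```python
# A toy illustration of "training a model": the data here is synthetic,
# standing in for features extracted from labeled cat/dog photos.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each image has been reduced to 64 numeric features.
# Class 0 = cat, class 1 = dog; the two classes get slightly different
# feature distributions so there is a pattern to learn.
n_per_class, n_features = 500, 64
cats = rng.normal(loc=0.0, scale=1.0, size=(n_per_class, n_features))
dogs = rng.normal(loc=0.5, scale=1.0, size=(n_per_class, n_features))
X = np.vstack([cats, dogs])
y = np.array([0] * n_per_class + [1] * n_per_class)

# "Training" = showing the model many labeled examples so it can learn
# the pattern that separates the classes.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The trained model can now classify examples it has never seen.
print(f"Accuracy on unseen examples: {model.score(X_test, y_test):.2f}")
```

Real models like ChatGPT work at vastly larger scale, but the principle is the same: more and better examples generally produce a more capable model.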

The catch is that scaling the training comes at a significant cost. Training larger AI models requires immense computing power, which consumes vast amounts of energy and resources. The increasing demand for AI chips and data centers has strained existing infrastructure and raised concerns about sustainability.

Despite these challenges, there is still potential for continued AI scaling. By addressing constraints such as power consumption, chip availability, data scarcity, and latency, researchers believe AI models can become even more powerful and capable in the coming years. Now I will comment on each of these four limiters to scaling.

One of the biggest constraints to AI scaling is power consumption. Training large AI models requires massive amounts of energy, which can strain power grids and contribute to greenhouse gas emissions. According to non-profit research institute Epoch AI, training the most advanced and powerful AI models in 2030 could require 200 times more power than today’s state-of-the-art algorithms.

To address this challenge, researchers are exploring ways to improve the energy efficiency of AI training. This includes developing more efficient hardware, optimizing algorithms, and exploring alternative energy sources. In my opinion, the only way to meet exploding demand for electricity from AI, Bitcoin mining, and growth in U.S. manufacturing is nuclear. More on that in another blog post.

Secondly, the availability of specialized AI chips such as those made by Nvidia is another critical factor limiting AI scaling. The demand for these chips has skyrocketed in recent years, leading to shortages and increased prices. Nvidia and some emerging competitors believe they can meet the demand. Nvidia has a market capitalization of $3.3 trillion as of this writing. To ensure a steady supply of AI chips, manufacturers are investing in expanding their production capacity and developing new technologies. Additionally, researchers are exploring alternative hardware options, such as neuromorphic chips, a promising new technology which may be more energy-efficient and better suited for certain AI tasks.

The third limitation is that AI models require vast amounts of data to train effectively. While there is a wealth of data available online, the quality and diversity of this data can vary significantly. I believe that as AI models become more sophisticated, there will be much more specialized and annotated data, such as the anonymized electronic health records of millions of patients. To address data scarcity, researchers are exploring techniques such as data augmentation, synthetic data generation, and transfer learning. These methods can help expand the available dataset and improve the performance of AI models.
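Here is a minimal sketch of one of those techniques, data augmentation. The “image” is a random array standing in for a real photo; the point is that one scarce example can be stretched into several training examples by flipping it and adding a little noise.

```python
# A minimal sketch of data augmentation: generating extra training
# examples by transforming the ones you already have. The "image" below
# is a random array standing in for a real 32x32 RGB photo.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))  # stand-in for one real training image

def augment(img, rng):
    """Return a new variant of img: random horizontal flip plus mild noise."""
    out = img.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1, :]                        # horizontal flip
    out = out + rng.normal(0, 0.02, out.shape)       # mild pixel noise
    return np.clip(out, 0.0, 1.0)

# One original image becomes several slightly different training examples,
# stretching a scarce dataset further.
augmented_batch = [augment(image, rng) for _ in range(8)]
print(len(augmented_batch), "augmented variants from 1 original image")
```

Synthetic data generation and transfer learning work toward the same goal from different angles: manufacturing new examples outright, or reusing what a model learned on one large dataset as a head start on a smaller one.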

Finally, the size and complexity of AI models can also introduce latency, which can slow down the training process and limit the scalability of AI systems. As models grow larger, the time it takes for data to flow through the neural network increases, leading to longer training times. To address latency, researchers are exploring techniques such as distributed training and hardware acceleration. These approaches can help to parallelize the training process and reduce the overall training time.
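To give a feel for distributed training, here is a conceptual sketch, simulated on one machine, of the data-parallel approach: the batch is split across workers, each computes a gradient on its shard, and the averaged gradient drives the update. A real system would use a framework such as PyTorch’s distributed tools across many GPUs; this toy NumPy version only illustrates the idea.

```python
# A conceptual sketch of data-parallel (distributed) training, simulated
# on one machine: split the batch across "workers", compute a gradient on
# each shard, average the gradients, and apply one shared update.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)                        # parameters of a tiny linear model
X = rng.normal(size=(64, 3))           # one batch of training data
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, 64)

def gradient(w, X_shard, y_shard):
    """Mean-squared-error gradient on one worker's shard of the batch."""
    err = X_shard @ w - y_shard
    return 2 * X_shard.T @ err / len(y_shard)

n_workers, lr = 4, 0.1
for step in range(100):
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    grads = [gradient(w, Xs, ys) for Xs, ys in shards]  # done in parallel in practice
    w -= lr * np.mean(grads, axis=0)   # averaged gradient = same update, less wall-clock time

print("learned weights:", np.round(w, 2))
```

The math works out to the same update a single machine would compute, but because the shards can be processed at the same time on separate hardware, the wall-clock training time drops.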

Despite the challenges, there is a strong belief that AI scaling will continue to drive significant advancements in the field. By addressing the constraints I outlined, researchers and developers can unlock the full potential of AI and create even more powerful and capable systems. At this point, investors and venture capitalists are very bullish.

In conclusion, the future of AI is closely tied to AI companies’ ability to scale models effectively. While there are significant challenges to overcome, the potential benefits of continued scaling are immense. By addressing the constraints of power, chips, data, and latency, the industry can pave the way for a new era of AI innovation and applications. In my mind, the big question is whether consumers and companies will be willing to pay AI companies enough for them to recover their scaling investments.

More about AI on my website at johnpatrick.com.

Note: I use Gemini AI and other AI chatbots as my research assistants. AI can boost productivity for anyone who creates content. Sometimes I get incorrect data from AI, and when something looks suspicious, I dig deeper. Sometimes the data varies by the source where AI finds it. I take responsibility for my posts, and if anyone spots an error, I will appreciate knowing about it and will correct it.