Memory remains one of the most important technologies for continued progress in artificial intelligence and machine learning (AI/ML) processing.
Memory bandwidth has played an important part in supporting each new computing paradigm, from the rapid expansion of PCs in the 1990s to the proliferation of video games in the 2000s and the rise of mobile and cloud computing in the 2010s. Over the previous 30 years the memory industry has responded to the industry's needs, and it is now being asked to continue innovating as we enter a new era of AI/ML.
As users processed increasing volumes of data with applications like Word, Excel, and PowerPoint, PCs drove growth in memory bandwidth and capacity. Graphical user interfaces, the Internet, and gaming pushed performance even further, giving rise to Graphics DDR, a new type of memory designed to satisfy higher bandwidth demands.
Mobile phones and tablets ushered in the age of on-the-go computing, and the need for long battery life prompted the memory industry to develop new mobile-specific memories to serve these markets. Cloud computing continues to drive capacity and performance advances to handle ever-larger workloads from connected devices.
Looking ahead, AI/ML applications will drive demand for higher memory performance, capacity, and power efficiency, posing a range of challenges for memory system designers. According to OpenAI, the compute used in AI/ML training rose by a factor of 300,000 between 2012 and 2019, doubling every 3.43 months. AI/ML models and training sets are also growing, with the largest models now topping 170 billion parameters and even bigger models on the way.
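To put those growth figures in perspective, the arithmetic can be sketched in a few lines. The snippet below is purely illustrative (the function names are mine, not from any source); it shows how many doublings a 300,000x increase implies, and how long that takes at one doubling every 3.43 months:

```python
import math

def doublings(growth_factor: float) -> float:
    """Number of doublings implied by a total growth factor."""
    return math.log2(growth_factor)

def years_to_reach(growth_factor: float, months_per_doubling: float) -> float:
    """Years needed to reach growth_factor at the given doubling period."""
    return doublings(growth_factor) * months_per_doubling / 12.0

# A 300,000x increase is roughly 18 doublings; at 3.43 months per
# doubling, that takes a little over five years of sustained growth.
print(round(doublings(300_000), 1))          # ~18.2 doublings
print(round(years_to_reach(300_000, 3.43), 1))  # ~5.2 years
```

The point of the exercise is the pace: a doubling period measured in months, rather than the roughly two-year cadence historically associated with Moore's Law, is what makes the memory system a pressing bottleneck.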
Maintaining these performance gains will be a challenge. Moore's Law has helped fuel some of the increases in performance and model size, but Moore's Law is slowing down, which makes gains of this kind harder to sustain. Improved memory systems, in addition to silicon enhancements, have contributed significantly to system performance gains.
It is still early in this next chapter of the AI/ML revolution, and the demand for increased processing power shows no signs of abating. Continuing the unprecedented advances of the last five years will require upgrades to every component of computing hardware and software.
In summary, memory will continue to be critical to achieving continued performance gains. HBM2E and GDDR6 provide best-in-class performance for AI/ML training and inference, and the memory industry is continuing to innovate to meet the future needs of these systems.