Samsung claims the memory provides 128GB/s of bandwidth while consuming just 1.2pJ per bit.
The Consumer Electronics Show (CES) is underway, as you may have heard, which means every technology company is announcing how its products benefit the AI industry or how they employ AI themselves. Samsung is no exception: it has introduced a new type of high-bandwidth, low-latency PC memory that could rival DDR5 in speed. That certainly sounds promising, but Samsung says it was created with one thing in mind, artificial intelligence, so it's not clear whether it will replace DDR modules or serve as a low-power option for new applications.
In a press release labeled as an "editorial" and sent out this week, Samsung discussed how its memory products will "harness the AI era." The editorial makes a number of broad, unsupported claims about artificial intelligence, such as: "At home, it is making our lives easier and more enjoyable." The juicy part, however, describes the memory products Samsung is developing to let AI models run on-device. Because of their scale and the amount of compute they demand, these models currently run in the cloud, and moving them onto local hardware poses a significant challenge for the industry. Samsung's list of products meant to let an AI model run locally rather than in the cloud includes a type of memory we have never heard of before: Low Latency Wide I/O (LLW) DRAM.
The editorial doesn't disclose much about this enigmatic new type of memory. According to Tom's Hardware, it says only that LLW DRAM delivers up to 128GB/s of bandwidth, comparable to a DDR5-8000 module, while consuming just 1.2pJ per bit, a relatively low figure that suggests it may be aimed at mobile devices such as smartphones and laptops rather than desktop PCs. There is no indication of how these modules accomplish this feat of high bandwidth and low power consumption. We will also have to wait for details on the products it may be used in, and on whether it will be an enterprise-only offering or available to client devices as well.
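To put those two figures together, a back-of-the-envelope calculation shows what 1.2pJ per bit implies at full throughput. This is a minimal sketch, assuming Samsung's "128GB/s" uses decimal gigabytes (10^9 bytes); the exact convention isn't stated in the release.

```python
# Rough power draw implied by Samsung's figures:
# 128 GB/s of bandwidth at 1.2 pJ of energy per bit transferred.
# Assumption: GB here means 10^9 bytes (decimal), per common marketing usage.

bandwidth_bytes_per_s = 128e9   # 128 GB/s
energy_per_bit_j = 1.2e-12      # 1.2 pJ/b

bits_per_s = bandwidth_bytes_per_s * 8
power_watts = bits_per_s * energy_per_bit_j

print(f"~{power_watts:.2f} W at peak bandwidth")  # ~1.23 W
```

Roughly 1.2W for the memory interface at full tilt would indeed fit a smartphone or thin laptop power budget far better than a desktop-class DIMM.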
The rise of artificial intelligence will certainly pose major hurdles for companies like Samsung as these AI models move off the cloud and onto our devices. AMD, Intel, and Nvidia have begun touting dedicated hardware, such as Neural Processing Units (NPUs) and tensor cores, to highlight the advantages of running AI applications on-device. For now, though, both the processing power and the applications needed to make on-device AI worthwhile are lacking. LLW DRAM could be one of a family of emerging technologies that help move AI models from the cloud to the device. As is usually the case with things of this nature, though, we will have to wait and see whether the technology has legs or becomes the next metaverse by the time the year is out.