New memories like MRAM, ReRAM, PCM, and FRAM are vying to replace embedded flash and, eventually, even embedded SRAM. In our Deep Look at New Memories webinar, our speakers Arthur Sainio, SNIA Persistent Memory Special Interest Group Co-Chair, Tom Coughlin of Coughlin Associates, and Jim Handy of Objective Analysis looked at both the present and the future, explaining which applications have already adopted new memory technologies in the marketplace, their impact on computer architectures and AI, the outlook for important near-term changes, and how economics dictate success or failure. If you have not yet watched the webinar, check it out in our SNIA Educational Library!
The audience was highly engaged and asked many interesting questions, some of which were answered at the end of the webinar. However, we could not get to all of them, so our Q&A covers those that remained. Feel free to reach out to us at askcms@snia.org if you have more!
Q: How long will it take for a new memory to replace DRAM?
A: DRAM has a couple of things going for it that any prospective rival does not: it is already produced in enormous volumes (about 20 billion chips per year), and it has more than five decades of manufacturing learning behind it. Any rival will need to compete against DRAM on cost. That effort will naturally take advantage of the new memory’s ability to scale far beyond DRAM’s limit, but the new memory will also need to be produced in high enough volume to overcome DRAM’s advantages in volume and learning. That’s going to take some time, but we think that the mid-2030s may see a transition underway.
Q: You talked a lot about MRAM applications, how are other new memories being used?
A. Our focus on MRAM is largely because of its widespread use right now. ReRAM is just beginning to find more applications and is a big focus of leading foundries. Panasonic introduced a ReRAM-based MCU way back in 2012, but it has been pretty much alone. Another company that’s similarly alone is STMicroelectronics, which ships the world’s only PCM-based MCU. Back when Intel was pursuing Optane we focused a lot of attention on PCM, because Optane used PCM and ran in pretty high volume. From a unit-volume standpoint, though, FRAM beats all others in an extremely narrow application space: RFID fare cards for trains. These chips are really tiny, though, so they don’t consume many wafers.
Q. Will any of these memories move from back end of line to front end of line production and why or why not?
A. There’s a big advantage in being back end of line (BEOL) that has been used to reduce the cost of 3D NAND. With BEOL you can build the memory bits on top of the support logic to make a significantly smaller chip. Today companies are just starting to migrate from that to a hybrid-bonded approach, where two wafers on two different process lines are used, one to make the bits and one to make the logic. A BEOL-friendly bit cell lends itself to this approach too.
Q. What role could these new non-volatile memories play in chiplet technology/heterogeneous integration?
A. Future processors will have a logic chip for the processor and supporting chiplets for memory, whether it’s firmware memory, scratchpad memory, or a cache. These new technologies can support all three, although most of today’s research is focused on slower versions that won’t be too useful for the lowest-level caches closest to the processor. Over time we expect that to change, too.
Q. What role will these memories play in CXL-based memory systems?
A. CXL provides a wide variety of solutions to computing architecture. It supports memories of all kinds: fast and slow, volatile and nonvolatile, byte-write and block-erase. Both CXL and NVMe can support any of these memory types, but NVMe is not fast enough to take advantage of really fast memories, so CXL is likely to be used in systems that need NVMe-like support at speeds significantly faster than NVMe can provide.
Q. How important is radiation resistance in memory?
A. That’s a tough question, because radiation barely makes a difference to a PC or smart phone user, but it’s a “Make or Break” issue for aerospace and certain other applications. There’s radiation everywhere, and it corrupts bits. Sometimes that just means that your PC bombs, resulting in “vocabulary enrichment” as you reboot. But if a program bit is lost in a deep-space satellite, it’s likely that the entire billion-dollar mission will become a total loss. There’s a lot of radiation in space, but the earth’s atmosphere absorbs much of it before it gets down to the surface, so it’s less of an issue down here than it is in space.
Radiation can also have a significant impact on DRAM used in networking equipment. It can cause bit flips, referred to as Single Event Upsets (SEUs), which can require the network equipment to be restarted. Using memory that has more resistance to radiation is beneficial in this case. If you are interested in this topic, check out a blog post from Jim Handy on memory issues in space and medical applications.
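To make the SEU idea concrete, here is a minimal Python sketch, purely illustrative and not part of the webinar, of the single-bit correction that ECC memory relies on. It encodes four data bits with a Hamming(7,4) code, flips one bit to mimic a particle strike, then locates and repairs the flip from the parity syndrome.

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def hamming74_correct(c):
    """Return (corrected codeword, error position or 0 if none)."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # recheck positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # recheck positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # recheck positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # binary position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1         # flip it back
    return c, syndrome

# Simulate a single-event upset: flip one bit of a stored codeword.
word = hamming74_encode([1, 0, 1, 1])
upset = list(word)
upset[4] ^= 1                        # particle strike on position 5
fixed, pos = hamming74_correct(upset)
print(pos, fixed == word)            # -> 5 True
```

Real server and aerospace memory controllers use wider SECDED or chipkill codes, but the principle of detecting and reversing a single flipped bit is the same.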
Q. How much are these various new memories affected by electromagnetic fields?
A. They’re not as susceptible to stray fields as many people think. While you can’t put an MRAM inside of the powerful magnetic coil of an MRI imager, a lot of other common sources of magnetism are not a concern. Jim Handy’s working with MRAM makers and leading MRAM researchers to put together a table that illustrates where this stands in real-life terms that anyone can understand.
Q. What sort of manufacturing volume will new memory need to replace DRAM, assuming it had similar performance?
A. Based on the NAND flash crossover with DRAM prices in 2004, and on Intel’s trouble getting Optane costs competitive with DRAM despite Optane’s significantly smaller die size, Objective Analysis estimates that the wafer volume of a competing technology must come within an order of magnitude of DRAM’s for its costs to fall below DRAM’s.
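To put a rough number on why volume matters, here is an illustrative back-of-the-envelope sketch using the textbook experience-curve (Wright’s law) model. The 70% learning rate is an assumption for illustration only, not a figure from the webinar or from Objective Analysis.

```python
import math

def cost_ratio(volume_ratio, learning_rate=0.7):
    """Wright's-law unit-cost penalty for a technology whose cumulative
    volume is `volume_ratio` times the incumbent's (e.g. 0.1 = one tenth).
    Each doubling of cumulative output cuts cost to `learning_rate` of
    its prior value; 0.7 is a hypothetical rate, not a webinar figure."""
    return volume_ratio ** math.log2(learning_rate)

print(f"{cost_ratio(0.1):.1f}x the incumbent's unit cost")  # ~3.3x
print(f"{cost_ratio(0.5):.1f}x the incumbent's unit cost")  # ~1.4x
```

Under those assumptions, a challenger running at one tenth of the incumbent’s cumulative volume carries roughly a 3x unit-cost handicap before any die-size advantage is counted, which is why getting within an order of magnitude of DRAM’s wafer volume is so important.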
Q. How will AI affect new memory demand, both in the data center and for consumer applications?
A. For anything to play a part in AI or any other computing application, it must provide a compelling cost advantage over more established technologies. In the data center that cost includes energy costs as well as the cost of the computing equipment itself, so if a slightly more costly technology can reduce energy costs so much that the total cost of ownership (TCO) is reduced, then it will find broad acceptance. These technologies aren’t there yet. Portable applications are somewhat different, because these memories can often reduce the cost of the system’s battery, creating a lower TCO than established technologies can provide.
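As a back-of-the-envelope illustration of that TCO argument, the sketch below compares a cheaper but power-hungrier module against a pricier, lower-power one over a five-year deployment. Every figure in it (prices, wattages, electricity rate, PUE) is a hypothetical placeholder, not data from the webinar.

```python
KWH_PRICE = 0.12               # $/kWh, assumed electricity rate
PUE = 1.6                      # power usage effectiveness (cooling overhead), assumed
LIFETIME_HOURS = 5 * 365 * 24  # five-year deployment

def tco(purchase_price, avg_watts):
    """Purchase price plus lifetime energy cost, cooling overhead included."""
    energy_kwh = avg_watts * LIFETIME_HOURS / 1000
    return purchase_price + energy_kwh * KWH_PRICE * PUE

print(f"incumbent:   ${tco(300, 12):.0f}")  # cheaper module, higher power -> ~$401
print(f"alternative: ${tco(360, 4):.0f}")   # pricier module, lower power  -> ~$394
```

With these made-up numbers the 20% price premium is more than paid back by the energy savings, which is exactly the crossover that a new memory has to demonstrate before data centers will adopt it.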
Q. You mentioned a 10nm limit for DRAM, are there ways that DRAM might get around that limit–such as with 3D memory?
A. DRAM’s kind of 3D already, so the benefit of turning it on its side as the industry did with NAND flash doesn’t bring DRAM anywhere near the benefits that the 3D switch brought to NAND flash. One very promising approach is to take some of the FRAM materials and use them to shrink the DRAM’s capacitor, but if you do that, you may as well just build an FRAM. Another possibility is to convert to a gain cell, which uses 2-3 transistors to replace the DRAM’s 1-transistor, 1-capacitor cell. One huge advantage of the gain cell is that it can shrink with the process, rather than being limited by the size of the capacitor. It’s early in the game, though, and although we are certain that an ingenious solution will get us past this hurdle, it’s too early to tell what that solution will be.