Researchers from Lancaster University have developed a new type of computer memory that they say combines the best features of DRAM and SSD storage… and then surpasses them both.

Like flash storage, ULTRARAM is said to be non-volatile memory, meaning it can retain data indefinitely without consuming power. According to press materials, data stored in ULTRARAM should still be accessible after more than 1,000 years. But it’s also said to be faster than a typical SSD, offering DRAM-like speeds (or better) while consuming far less power than DRAM.

The technology behind ULTRARAM was first unveiled in 2020, and researchers published a paper in 2022 giving it a name and outlining its implementation.

But since then Lancaster University has spun off a startup company called Quinas Technology in hopes of commercializing the memory and bringing it to market. It’s been getting attention this month after winning a “Best in Show” award for “Most Innovative Flash Memory Startup” at the Flash Memory Summit.

For now, mass production of ULTRARAM still looks to be years away. And while the technology could eventually be used in personal computers, smartphones, and other consumer devices, it’s likely to show up first in data center products, which could benefit greatly from memory that consumes less power than traditional RAM while offering faster-than-SSD data transfer speeds and non-volatile storage that would prevent data loss in the event of a power cut.

Of course, this isn’t the first time we’ve heard a company claim that its next-gen technology would bridge the gap between RAM and SSD storage. Intel once made similar claims about the 3D XPoint technology underlying its Intel Optane memory products. But those were slow to come to market, never quite lived up to their promise, were never particularly widely adopted, and were eventually scrapped.

press release via TechRadar and PC Gamer



9 Comments


  1. If it is too expensive vis-à-vis solid state storage, then maybe it could take on the role of the DRAM cache in an SSD: a cache that doesn’t need to be flushed can be much larger and achieve a much higher hit rate.

  2. I wonder how it is positioned for rewrites. One of the benefits of separate RAM, in addition to its speed, is that it can be thrashed with writes (the way a lot of software tends to do) without wearing out. If you tried that with an M.2 SSD, it would significantly shorten its lifespan; anyone who has tried using an SD card for swap space on an SBC knows how bad this can get, although SD and eMMC are unusually weak when it comes to frequent writes. If its wear is comparable to an SSD’s, then it’s still most likely to be adopted as a faster SSD replacement, subject to cost and speed numbers (see the rough wear estimate sketched below). If it is as resilient as traditional RAM, then we’re looking at something much more interesting.
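
    To put rough numbers on the wear concern, here’s a minimal back-of-the-envelope sketch in Python. Every figure in it (endurance rating, swap write volume, write amplification) is an illustrative assumption, not a measurement of any real drive.

    ```python
    # Back-of-the-envelope SSD wear estimate under a swap-heavy write load.
    # All constants below are illustrative assumptions, not real-drive specs.

    TBW_RATING_TB = 600           # assumed endurance rating for a 1 TB consumer SSD
    SWAP_WRITES_GB_PER_DAY = 200  # assumed heavy swap/thrash workload
    WRITE_AMPLIFICATION = 3.0     # assumed flash write amplification factor

    # Effective data written to flash per day, in terabytes
    tb_per_day = SWAP_WRITES_GB_PER_DAY * WRITE_AMPLIFICATION / 1000

    years = TBW_RATING_TB / tb_per_day / 365
    print(f"Estimated drive lifetime: {years:.1f} years")  # ~2.7 years here
    ```

    Under those assumptions the drive’s rated endurance is exhausted in under three years, which is why swapping onto flash is a poor substitute for RAM that can be rewritten indefinitely.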

  3. If this does make its way into production, it would be a good candidate for a unified memory technology.

    Basically the idea is that you could use the same memory to handle both RAM and storage, which would avoid having to read files from storage and then write them into RAM before being able to work with them (a loose software analogy is sketched below).

    So far, Nvidia and Apple are the only ones who have pulled it off, but this memory technology could further improve it.

    Although it does have the downside of offering less flexibility and upgradability. For example, Apple M1 and M2 products don’t allow the storage or RAM to be upgraded, because the unified memory is part of the SOC die.
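
    To illustrate the “skip the copy into RAM” idea in ordinary software terms, here’s a minimal Python sketch using mmap. It’s only a loose analogy (memory-mapping a file still goes through the page cache), not how ULTRARAM or any shipping unified memory actually works, and the file name here is hypothetical.

    ```python
    import mmap

    # Create a small demo file (hypothetical contents) so the sketch is self-contained.
    with open("dataset.bin", "wb") as f:
        f.write(bytes(64))

    # Map the file and work on its bytes in place, with no explicit
    # read-into-a-separate-RAM-buffer step.
    with open("dataset.bin", "r+b") as f, mmap.mmap(f.fileno(), 0) as mm:
        header = mm[:16]   # slice reads straight from the mapped pages
        mm[0:4] = b"ULTR"  # in-place update; the OS writes it back to the file
    ```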

    1. …If images like these right here indicate anything, I’m pretty sure Apple calls its RAM “unified memory” because the CPU and GPU can use all of it freely, while storage is a separate chip (or chips). And the memory is separate dies, but they’re still integrated into the same package as the CPU before that gets soldered on.

    2. The storage on an M1/M2 Mac is not part of the SoC, not integrated with the RAM, and could theoretically be replaced. It’s not replaced in practice because Apple solders it down (they’ve been doing that for a while) and stores a bunch of data on it, which means putting in another set of chips won’t work until they’re imaged in some way, which is hard and expensive. The unified memory thing explains why you can’t upgrade the RAM, but it doesn’t limit how they could have designed the storage, assuming they weren’t interested in shortening the life of their hardware.

      1. Sorry, I meant to say that the RAM part of the unified memory is part of the CPU. The storage chips are indeed their own BGA packages soldered to the motherboard.

        Apple’s implementation of unified memory is that the storage and RAM share a controller, so they can communicate much faster.

        The ultimate form of unified memory will use a memory technology that can combine both purposes into a single memory unit.

  4. Dang it, I really wish breakthroughs in computing materials would stop. By the time this gets to market for consumer electronics (assuming there’s anything left of that besides smartphones), there’s going to be so much bloat, from millions of poorly skilled code monkeys copying together blobs of code from chatgpt-6 running on NVIDIA AI co-processors made with this stuff for memory (unless somehow an actual CUDA competitor appears), that getting Windows 15 to run as responsively as 10 does will require this.
    It’s not as bad as the prospect of CPUs made partially from LK-99, which would pretty much compel moving all executable code to datacenter machines just for the energy savings, but it’s still a major boost to AI rendering pretty much all somewhat intelligent people, except for its rich owners, redundant and due for extermination.

    1. That’s enough internet for you today. Just enjoy what you have now, connect with nature, then come back when you’re recalibrated.