Almost two years after Intel and Micron unveiled a new type of solid state storage said to be up to 1,000 times faster than flash, Intel is finally launching a commercial product that uses 3D XPoint technology.

It’s called the Intel Optane P4800X, and it’s a solid state drive designed for enterprise applications. It’s also got an enterprise price tag: the first version to ship will be a 375GB SSD that sells for $1,520.

Eventually we’ll probably start to see consumer-oriented Intel Optane SSDs, but not until prices fall.

What makes 3D XPoint interesting is that it offers read and write speeds that rival what you’d expect from DRAM memory. But unlike DRAM, you can use 3D XPoint for long-term storage, since it’s non-volatile memory.

In other words: anything saved in DRAM will disappear when you turn off the power. Like other SSDs, the Intel Optane P4800X can store data indefinitely without using power.

The upshot is that not only can you do things like copy files and read data more quickly, but you can also use 3D XPoint as “virtual memory,” to get near-RAM speeds. AnandTech notes that customers currently have to pay extra for “Intel Memory Drive Technology” if they want to use Intel’s software for efficiently using its Optane SSD for virtual memory.

While the first Intel Optane SSD to ship will feature just 375GB of storage, Intel plans to launch 750GB and 1.5 TB models later this year.

via Tom’s Hardware


15 replies on “Intel launches its first Optane SSD with 3D XPoint memory (for enterprise)”

  1. Reports/reviews show it only helps for extreme random access situations. Almost all desktop/gaming users won’t notice any difference with this particular card. It’s just super expensive for a pittance of storage.

  2. Best use case for this today is caching.

    High speed, cost-effective at small sizes, and better endurance than NAND mean it’d be a great disk cache.

    In fact the endurance alone might make it cost effective. In one solution I work with the server is spec’ed with 128GB SSD for the cache, but only assigned ~64GB at any time. The rest is there for wear leveling over the server’s 5 year life. If I can get a faster, more reliable 64GB Optane for a similar price to 128GB NAND, I’d be very happy.
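The over-provisioning scenario above can be sketched with some back-of-envelope arithmetic. This is only an illustration: the 128GB/64GB split and 5-year life come from the comment, while the 3,000 program/erase cycle rating is an assumed, typical figure for NAND, not a number from the source.

```python
# Rough over-provisioning arithmetic for the caching scenario above.
# 128 GB raw NAND, ~64 GB exposed as cache, 5-year service life (from the
# comment); the 3,000 P/E-cycle rating is an assumed illustrative figure.

drive_gb = 128                        # raw NAND capacity
assigned_gb = 64                      # space actually exposed as cache
spare_gb = drive_gb - assigned_gb     # reserved for wear leveling

pe_cycles = 3000                      # assumed program/erase cycles per cell
total_writable_gb = drive_gb * pe_cycles   # idealized total bytes writable

days = 5 * 365
sustainable_gb_per_day = total_writable_gb / days

print(f"Spare area: {spare_gb} GB ({spare_gb / drive_gb:.0%} over-provisioning)")
print(f"Sustainable writes: ~{sustainable_gb_per_day:.0f} GB/day over 5 years")
```

Halving the exposed capacity gives the controller a 50% spare area to spread writes across, which is why the same NAND lasts the full service life.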

    1. I’d say make a new architecture around it instead.
      New CPU: direct access to this memory. Throw out the L2/L3 cache and use its space for a lot more registers. More registers make it easier to run multiple processes on the same core, and make virtualization easier too. Cache, on the other hand, is only needed to prefetch from relatively slow RAM, so it’s no longer necessary. Throw out branch prediction, prefetch, and long pipelines and queues; we only needed them to handle NOPs while waiting for memory, and they’re not needed here. Use a simple RISC architecture instead, with several cores that share the entire memory, plus hardware memory protection. RISC can do more instructions per clock.
      We’d need a new filesystem to go with this. Since the memory is both fast and non-volatile, the filesystem can be simpler; instead of partitions it should work like a RAM disk and be resizable at any moment, so you can dedicate your memory to storage or to the CPU depending on what you’re doing.
      Linux could most likely be modded to work on this.
      This would mean a computer with fewer components, and thus a lower price (with bigger volumes, the price of XPoint would drop), that is more power-efficient while also being faster, more robust, and easily scalable.
      This technology will have more impact on our computers, and sooner, than quantum computing. Instead of increasing raw processing power, the last decade was about slowly removing all the bottlenecks from the computer; the last step was SSDs becoming mainstream. Now that all the puzzle pieces are ready, it’s time to throw out all the legacy stuff we’ve inherited from the ’70s and build a new type of computer where all the parts run at the same speed and the CPU doesn’t have to wait for RAM or storage. Legacy software can just be recompiled; as long as the kernel handles the new type of RAM well, not much changes from an average app’s standpoint: malloc will still give you some memory, and that’s it.

      1. Totally agree. And this could be what makes or breaks Optane.

        Even PCIe M.2 SSDs would be more useful if the existing storage-tier architecture changed. This is evident in the fact that they’re 4-5 times faster than a SATA3 SSD, yet real-world performance improvements are minimal. The architecture and OS just aren’t set up to make the best use of them.

        What’s probably most significant about Optane is that you can write to it with low latency, a byte at a time, which is radically different from SSDs’ high-latency, page-based approach. So while Optane isn’t as fast as DRAM, it’s more than fast enough for a new category of storage: the place you put non-performance-critical but persistent data, i.e. your working set. Having your entire working set in persistent memory is awesome, but I don’t know any software architecture that expects to work with data that way. They all assume slow persistent storage plus caching or serialization of data in memory, and factor in the need to shuttle data between the two.

        So the future is never “loading” your working set; it’s just there, ready to be addressed as if it were already in RAM.
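The byte-addressable programming model described above can be approximated today by memory-mapping a file: you read and write at byte offsets with no explicit load/save step. A minimal sketch, with the caveat that a plain file mmap only mimics the model; real persistent memory would be mapped from a DAX-enabled device, and the file name here is a made-up placeholder.

```python
import mmap
import os
import struct

# Sketch: treating a file as byte-addressable "persistent RAM" via mmap.
# An ordinary file stands in for persistent memory here, purely to show
# the programming model (direct byte-offset access, no read()/write()).

PATH = "working_set.bin"   # hypothetical backing file
SIZE = 4096

# Create and size the backing file once.
with open(PATH, "wb") as f:
    f.truncate(SIZE)

with open(PATH, "r+b") as f:
    buf = mmap.mmap(f.fileno(), SIZE)
    # Store a 64-bit counter directly at byte offset 0.
    struct.pack_into("<Q", buf, 0, 42)
    buf.flush()            # ask the OS to push dirty pages to the medium
    value, = struct.unpack_from("<Q", buf, 0)
    buf.close()

os.remove(PATH)
print(value)
```

On real persistent media the value written at offset 0 would survive a reboot with no serialization step, which is exactly the “never loading your working set” model the comment describes.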

  3. So what happens to encryption technology when there is RAM that no longer “turns off”?

    1. Well, that depends on the use case, but there is no doubt that whatever the solution, you will be able to fully encrypt your data like you can today; it will be built into the hardware.

  4. Can’t wait for this to advance and become cheaper.
    We could have small PCs with APUs serving as the CPU and GPU and Intel Optane serving as both storage and RAM.

    1. My guess is that using PCIe is a transitional stage to fit in with existing architectures. When this technology was announced a while ago, much was made of the fact that there would need to be new connection and bus architectures for the full potential of this technology to be realised, particularly to take advantage of the potential speed of the new memory system. We can only wait and see how the scene evolves.

    2. PCIe is a bottleneck, but like connecting an SSD to an old SATA2 interface you still get a lot of benefits. PCIe may limit absolute bandwidth but the low latency, high endurance characteristics are still a big benefit over NAND.

      Nonetheless, the plan is to attach XPoint via DIMM slots in the future to break past the limits of PCIe.

  5. Imagine the possibilities when this technology gets cheap enough to be used in a phone. How many years will we have to wait?

    1. This would also make it possible to build computers with only this memory, without the need for any other storage device. Optimize a new architecture and OS for it, and it would add up to a tremendous leap in speed with a simpler architecture.

        1. The Optane drive above is rated for 13.2 petabytes written, according to the specs. That already far exceeds the endurance of current SSDs, so endurance isn’t likely to be a problem (even if you use it more like RAM, since wear-leveling algorithms will be in place).
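The 13.2-petabytes-written figure can be converted into drive writes per day. The 5-year period used below is an assumption for the conversion (the comment only gives the total-bytes-written rating), and decimal units are assumed throughout.

```python
# Back-of-envelope conversion of the endurance figure in the comment:
# 13.2 PB written on a 375 GB drive. The 5-year rating period is an
# assumption; the comment only states the total-bytes-written figure.

pbw = 13.2e15          # rated bytes written (13.2 PB, decimal units)
capacity = 375e9       # 375 GB

full_drive_writes = pbw / capacity
dwpd = full_drive_writes / (5 * 365)   # drive writes per day over 5 years

print(f"{full_drive_writes:.0f} full-drive writes, ~{dwpd:.0f} DWPD over 5 years")
```

For comparison, typical enterprise NAND SSDs of the era were rated at roughly 1-3 DWPD, so even filling the entire drive many times a day would stay well within the rating.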

      1. This is intriguing. In this possible future config, I’m guessing less power drain too. Certainly a new circuit/motherboard design. Wonder, also, what it would do to graphics cards with built-in RAM. The old “shared memory model” (with all the swapping and caching) takes on a whole new meaning. Exciting…

        Honestly… $1.5k for 375GB sounds pretty decent for an intro price.
