LWN: Comments on "LFCS: Preparing Linux for nonvolatile memory devices" https://lwn.net/Articles/547903/ This is a special feed containing comments posted to the individual LWN article titled "LFCS: Preparing Linux for nonvolatile memory devices". en-us Sat, 20 Sep 2025 08:13:01 +0000 Sat, 20 Sep 2025 08:13:01 +0000 https://www.rssboard.org/rss-specification [email protected] LFCS: Preparing Linux for nonvolatile memory devices https://lwn.net/Articles/703417/ https://lwn.net/Articles/703417/ raven667 <div class="FormattedComment"> <font class="QuotedText">&gt; every program having its own implementation of half a filesystem, non-interoperable with any of the others</font><br> <p> Probably not every program, but every major language family that doesn't share low-level compatibility of its data structures, much as having a C API is today the lowest common denominator for a language's compatibility with other languages. 
Or how JSON has become a medium of exchange for network software.<br> <p> <font class="QuotedText">&gt; naming things and letting users, and disparate programs, access them</font><br> <p> With the popularity of application sandboxing, with Flatpak on the desktop and Docker on the server, there are far more defined and regimented ways for applications to share data, so I don't expect arbitrary disparate programs accessing data to be supported in this model.<br> </div> Thu, 13 Oct 2016 14:32:02 +0000 LFCS: Preparing Linux for nonvolatile memory devices https://lwn.net/Articles/703409/ https://lwn.net/Articles/703409/ nix <div class="FormattedComment"> Fundamentally it seems to me that we'll still want something like a filesystem: a collection of named, possibly hierarchically or otherwise structured blocks of data that can be used by multiple programs without regard for which program created them. Just arranging for programs to keep their data structures around forever doesn't do that: each program would have to implement some sort of organizational principle, and if they're not all using the same one this is tantamount to every program having its own implementation of half a filesystem, non-interoperable with any of the others or with external tooling. This seems drastically worse than what we have now.<br> <p> Persistent memory is nice because it might mean that e.g. you could shut down a machine with a long-running computation on it and have it just restart again. Of course, with the CPU caches not persistent, it might have to go back a few seconds to a checkpoint. 
You can often do that *now* by just checkpointing to disk every few seconds, but with persistent storage you can presumably do that even if there are gigabytes of state (assuming that the persistent memory in question doesn't wear out on writes the way flash does).<br> <p> But persistent memory will not allow us to do away with filesystems: neither the API nor the allocation layer. The fundamental role of the filesystem API -- naming things and letting users, and disparate programs, access them -- will still be needed, and cannot be replaced by object storage any more than you can replace a filesystem with an inode table and tell users to just access everything by inode number. Equally, the role of filesystems themselves -- the object allocation layer -- is still there: it's just a filesystem for persistent storage, with differently strange tradeoffs than every other filesystem's differently strange tradeoffs. Even having files with no name is not new: unlinked files have given us that for decades, and more recently open(..., O_TMPFILE) has too.<br> <p> </div> Thu, 13 Oct 2016 12:46:15 +0000 LFCS: Preparing Linux for nonvolatile memory devices https://lwn.net/Articles/703184/ https://lwn.net/Articles/703184/ ecloud <div class="FormattedComment"> I think the ultimate point about getting userspace on board is that we need next-generation languages that make memory leaks impossible, that maintain data structures compactly in memory (avoiding linked lists and the like), and trade in the "filesystem" APIs for the appropriate object-storage APIs. (But yes, some databases are already appropriate places to start with this.) Instead of having APIs that make filesystem access completely different from memory manipulation, we need a way of marking data structures persistent. 
The language should then translate that into marking pages of memory persistent, and the OS should ensure that persistent pages are stored on the appropriate device. Applications should take care not to write to persistent structures more often than necessary; but otherwise either the language implementation or the OS should provide a way to cache frequently-updated persistent structures in volatile memory, and do checkpointing of changes. (Maybe marking the structure both volatile and persistent would mean that.) I guess the next issue is that sync-written structures could be temporarily out of sync with those which are cached; then either all writes need to go to the cache first and then be flushed to NVM at the next checkpoint, or else the system needs to be power-failure-proof (not a problem for battery-powered devices; line-powered machines can have at least a capacitor-based UPS sufficient for all writes to be completed before power fails).<br> <p> So, rebooting, or even restarting applications, should become exceedingly rare. It places great demands on all software to be as reliable as the kernel itself: keep running for years with no leaks, no overflows, no bugs of the kind that require restarting the software as a workaround. You couldn't truly restart the application without losing all its stored data too. Using filesystems has made it harder to write software (so much persistence-related code that has to be written), but it has also allowed us to be too lazy for too long about the reliability of in-memory operations. If we invest as much effort into keeping memory beautifully organized as we have invested into file-based persistence, maybe we can get there?<br> <p> I doubt that Linux will be the leader here, but there must be some current university research project by now? Anybody know of one? 
A long time ago there was KeyKOS, which had checkpointing-based persistence; then there was Eros, but its focus shifted more strongly to capability-based security than to checkpointing. (And Linux still doesn't have such advanced capability-based security, either. This is why Sandstorm exists: the OS doesn't do it, so you have to rely on containers and management of them to isolate processes from each other.)<br> <p> So now we have NVMe devices, like the M.2 flash drives. Can they be configured as memory-mapped, without using mmap()? Because using mmap() implies that all reads and writes will be cached in volatile RAM, right? If the hardware allows us to have RAM for one range of addresses and flash for another range, this work could begin.<br> </div> Tue, 11 Oct 2016 08:08:25 +0000 LFCS: Preparing Linux for nonvolatile memory devices https://lwn.net/Articles/548777/ https://lwn.net/Articles/548777/ Jonno <div class="FormattedComment"> <font class="QuotedText">&gt; Why are you assuming that the NVM will be separate from the main memory subsystem like NAND is currently?</font><br> <p> <font class="QuotedText">&gt; Even if this NVM is on the same DDR socket, they seem to say that the number of writes is limited, and the time it takes to write NVM is longer.</font><br> <p> There are several different types of NVM (Non-Volatile Memory), and while some have a limited number of write-cycles, others don't. Performance also varies, and several types are faster than DRAM (currently used as main memory), though to my knowledge none are quite as fast as SRAM (currently used as CPU cache). 
<br> <p> That said, it is going to be a while before you can get anything with both good performance and unlimited write-cycles for anything resembling the price of regular DRAM, so while I expect some type of NVM will eventually be used in the main memory system, we are going to need some other system in order to use the current, imperfect NVM types in the meantime...<br> </div> Sun, 28 Apr 2013 18:28:27 +0000 LFCS: Preparing Linux for nonvolatile memory devices https://lwn.net/Articles/548671/ https://lwn.net/Articles/548671/ Lennie <div class="FormattedComment"> The article had server in the title and ASIC in the text. So yes, probably.<br> <p> And the ASIC hopefully also does some wear leveling to make sure it can always write to Flash.<br> </div> Fri, 26 Apr 2013 17:38:17 +0000 LFCS: Preparing Linux for nonvolatile memory devices https://lwn.net/Articles/548626/ https://lwn.net/Articles/548626/ etienne <div class="FormattedComment"> <font class="QuotedText">&gt; Why are you assuming that the NVM will be separate from the main memory subsystem like NAND is currently?</font><br> <p> Even if this NVM is on the same DDR socket, they seem to say that the number of writes is limited, and the time it takes to write NVM is longer.<br> The processor can really write a lot of times per second to the DDR, flushing the same cache line time and time again - we do not want that penalty, nor do we want wear levelling at that point.<br> Note that NVM obviously does not need refresh cycles; I wonder what effect that has on performance.<br> <p> Moreover, I am not sure I always want to suspend Linux instead of powering off - sometimes I want a clean slate and to come back to the login screen so that applications which have been "leaking" memory for the last 10 days restart afresh - or 
after an upgrade of a library, be sure that no application still uses the old version (removing a library file from the filesystem does not automatically restart users of the old library version, which still have the old, deleted file memory-mapped).<br> For the latter point, maybe a "full boot" each time is the best solution.<br> </div> Fri, 26 Apr 2013 12:14:06 +0000 LFCS: Preparing Linux for nonvolatile memory devices https://lwn.net/Articles/548597/ https://lwn.net/Articles/548597/ rahvin <div class="FormattedComment"> I apparently missed the capacitor statement in the article. But that would mean battery backup for more than just the DIMM, as you need some logic to manage the copy process. I can't help but think that is a server-only type of installation. <br> </div> Fri, 26 Apr 2013 04:27:04 +0000 LFCS: Preparing Linux for nonvolatile memory devices https://lwn.net/Articles/548567/ https://lwn.net/Articles/548567/ Lennie <div class="FormattedComment"> Controlled?<br> <p> I assume that it would have a store of electricity (like a capacitor), which means that when power is lost, it will start copying the data in the DIMM to Flash.<br> <p> The size of the Flash is a little larger than the DIMM (to have room for failed bits in Flash).<br> <p> And it would have enough electricity to completely copy the content of the DIMM to Flash.<br> <p> This is similar to a battery-backed RAID-controller with a write cache. When you do a write, the data is kept in the RAM of the RAID-controller and the application gets an ACK that it is stored. 
On power loss it will have enough electricity in a battery to write what is in RAM to the storage devices.<br> <p> So yes, it is controlled, but it is fully handled by the product because it is self-powered and does not rely on any other component.<br> </div> Thu, 25 Apr 2013 22:48:23 +0000 LFCS: Preparing Linux for nonvolatile memory devices https://lwn.net/Articles/548566/ https://lwn.net/Articles/548566/ rahvin <div class="FormattedComment"> That seems like somewhat limited functionality to me. It only seems to be useful in controlled-shutdown circumstances. It would seem to make more sense to make it a little more functional than to only utilize it while the power is off. <br> </div> Thu, 25 Apr 2013 22:35:21 +0000 LFCS: Preparing Linux for nonvolatile memory devices https://lwn.net/Articles/548534/ https://lwn.net/Articles/548534/ Lennie <div class="FormattedComment"> As I read the story: the copying of the content of the DIMM to Flash will only happen at shutdown, with the content read back in on power-on.<br> <p> That would mean there is no overhead, and the number of writes and reads to flash is small over the lifetime of the module.<br> </div> Thu, 25 Apr 2013 19:31:29 +0000 LFCS: Preparing Linux for nonvolatile memory devices https://lwn.net/Articles/548533/ https://lwn.net/Articles/548533/ rahvin <div class="FormattedComment"> Why are you assuming that the NVM will be separate from the main memory subsystem like NAND is currently? Industry is already demoing several new kinds of main system memory that are non-volatile. I believe the expectation is that you won't even have DDR memory, that your entire main memory will be NVM. 
<br> </div> Thu, 25 Apr 2013 19:23:56 +0000 LFCS: Preparing Linux for nonvolatile memory devices https://lwn.net/Articles/548531/ https://lwn.net/Articles/548531/ rahvin <div class="FormattedComment"> I don't recall where I saw it, but the products are already in production. It's a standard DIMM module, but it has a NAND chip mounted to the DIMM as well. I seem to recall that the NAND module was significantly larger, such that you only needed one NAND chip for every 8 RAM chips. Thus a standard DIMM with 8 chips had a single NAND chip mounted to the back (or in the middle if thickness is a concern) of the DIMM. <br> <p> It's pretty neat technology, though I wonder about the overhead of copying into and out of the NAND, because there is still a difference in latency. It was my understanding that there are several different nonvolatile versions of RAM coming that have similar performance to standard RAM. I believe the ferrous magnetic stuff is already in limited production. <br> </div> Thu, 25 Apr 2013 19:21:43 +0000 storage and emulation https://lwn.net/Articles/548103/ https://lwn.net/Articles/548103/ ndye <blockquote> <i>. . . to work together to make NVM devices work optimally on Linux systems </i> </blockquote> <p> A couple of spots where I don't catch on: <ul> <li>Is this planning only for storage devices (files with a path), rather than the malloc'd working storage of either the OS or an application? </li> <li>How might we emulate this in QEMU, VirtualBox, etc. for tracing and benchmarking? 
</li> </ul> </p> Mon, 22 Apr 2013 19:06:07 +0000 mmap https://lwn.net/Articles/548101/ https://lwn.net/Articles/548101/ daniel <div class="FormattedComment"> It is obvious to me that high-performance filesystems will be the first to take advantage of these new hardware capabilities transparently. This in no way conflicts with Ric's message. Or putting it another way, why preach to the converted? It's the app people who need to get their thinking caps on, not the usual suspects.<br> </div> Mon, 22 Apr 2013 18:36:29 +0000 Fairness versus performance https://lwn.net/Articles/548090/ https://lwn.net/Articles/548090/ arjan <div class="FormattedComment"> If you have a very high-speed IO device... quite often you care more about fairness (and maybe even bandwidth allocation) between tasks/cgroups/whatever than pure raw performance.<br> <p> CFQ at least tries to do something there... deadline and co not so much.<br> <p> Not saying CFQ is the be-all and end-all of IO schedulers, but only looking at throughput or latency is clearly not the whole story.<br> </div> Mon, 22 Apr 2013 15:53:18 +0000 LFCS: Preparing Linux for nonvolatile memory devices https://lwn.net/Articles/548063/ https://lwn.net/Articles/548063/ etienne <div class="FormattedComment"> Now that the address space of a PC is 64 bits (well, 40 or 48 physical), the long-term goal should be having ready-to-use stuff in NVM:<br> - The Linux kernel should be loaded there, with its "persistent" data<br> - Most libraries should be loaded and pre-linked there (lazy linking), with their "persistent" data<br> - Maybe also have some/all servers and possibly some applications.<br> <p> The problem is to define "persistent" data, i.e. 
data which is necessary and will be re-used; a first approximation would be statically allocated data (i.e. the data segment but nothing from malloc()) - but then where do we put the stack: "top" part in NVM and "bottom" part in standard DDR, or stack in NVM with COW (Copy to DDR on write) pages?<br> <p> The other problem is upgrading either the kernel or some libraries: how to unload a library (and its dependencies) when upgrading, how to find where a library was loaded (at the previous boot), both its physical and virtual address, and which version/SHA1 it was?<br> <p> Maybe we should use NVM with a parallel and explicit NVMmalloc()/NVMfree() (because there is no way to magically do the right thing), instead of using a filesystem?<br> </div> Mon, 22 Apr 2013 11:25:06 +0000 LFCS: Preparing Linux for nonvolatile memory devices https://lwn.net/Articles/548060/ https://lwn.net/Articles/548060/ walex <div class="FormattedComment"> There may actually be no case for redesigning current APIs, because they provide a logical view of devices that is quite independent of the underlying technology, and that independence is well worth some overhead. And most of that overhead lies in the crossing of the kernel-user protection boundaries, and not in the API per se. 
See, for example, the rationale for the 'stdio' library.<br> <p> But there are a number of aspects of the current Linux design where, for what I think are nakedly "commercial" reasons, some assumptions about physical device properties have been embedded in the abstraction layer implementation.<br> <p> Of these, the most imbecilic was the plugging/unplugging "idea", which is based on trading latency for throughput INSIDE THE PAGE CACHE, which is wholly inappropriate for a device-independent layer and for a number of physical devices too, in particular low-latency ones (and it has some bizarre side effects too).<br> <p> It may well be that for some device technologies trading latency for throughput is worthwhile, but this should be done in the device driver, and should be configurable, or at least it should be possible to disable it.<br> </div> Mon, 22 Apr 2013 07:57:45 +0000 LFCS: Preparing Linux for nonvolatile memory devices https://lwn.net/Articles/548050/ https://lwn.net/Articles/548050/ dowdle <div class="FormattedComment"> The video of the presentation was finally made public:<br> <p> Collaboration Summit 2013 - Persistent Memory &amp; Linux <br> <a href="https://www.youtube.com/watch?v=Ec2iu5vDjUA">https://www.youtube.com/watch?v=Ec2iu5vDjUA</a><br> </div> Mon, 22 Apr 2013 00:41:08 +0000 LFCS: Preparing Linux for nonvolatile memory devices https://lwn.net/Articles/548023/ https://lwn.net/Articles/548023/ Lennie <div class="FormattedComment"> The price of RAM and Flash combined probably means these devices will come on the market before NVRAM:<br> <p> <a 
href="http://www.computerworld.com/s/article/9238105/Non_volatile_DIMM_cards_coming_soon_to_a_server_and_array_near_you">http://www.computerworld.com/s/article/9238105/Non_volati...</a><br> <p> </div> Sun, 21 Apr 2013 13:59:42 +0000 mmap https://lwn.net/Articles/548004/ https://lwn.net/Articles/548004/ plougher <div class="FormattedComment"> <font class="QuotedText">&gt; While I trust that Ric Wheeler knows what he's talking about, the whole article (and perhaps the talk) seems much hand-waving about how they need to invent all new APIs and rewrite everything from scratch, with no argumentation.</font><br> <p> Getting the API right first thing is important. An API is an advertisement/contract which specifies what the subsystem can do efficiently and safely. Choose the API badly and you could be saddled with poor behaviour/supporting difficult to implement features for a long time.<br> <p> An example of poor API re-use I always think about here is MTD (the Memory Technology Device subsystem covering NAND and NOR flash). MTD was introduced ~2002 as a sub-layer for the JFFS2 flash filesystem. But it also introduced user-level block device access to the underlying NAND/NOR device. This was probably mainly to allow user-level applications to write to the Out of Band data, erase sectors etc, as the block device support was semi-functional, no bad block handling, wear leveling etc. Knowledgeable users of MTD know never to mount a read-write block filesystem (i.e. ext4) via this block device, as it will quickly destroy your flash device... But it is there, and it constantly traps the unwary. In fact "Can I mount ext2 over an MTD device?" 
is a FAQ on MTD websites.<br> <p> Beyond that, of course, are the instances where filesystems become trapped into offering API guarantees that were never explicitly promised, merely assumed. The delayed allocation changes in ext4 that caused data loss are an obvious example.<br> <p> </div> Sun, 21 Apr 2013 02:06:28 +0000 mmap https://lwn.net/Articles/548000/ https://lwn.net/Articles/548000/ plougher <div class="FormattedComment"> <font class="QuotedText">&gt; Does anybody have a reference with some more background on why mmap() might not be salvageable?</font><br> <p> Well, an obvious observation is that mmap is page-oriented and NVM is byte-oriented/accessible. If you're layering a filesystem on top of NVM you don't have to align the files to block boundaries or pad, but can pack much more closely... But without block alignment you lose the ability to mmap the file (you can copy to intermediate aligned buffers, but this is extra overhead).<br> <p> CRAMFS-XIP (a compressed filesystem with execute-in-place extensions on NOR flash) has exactly this problem. NOR is memory-addressable and thus directly mmapable. 
However, if you want to execute in place, you can't compress or pack the file in CRAMFS.<br> <p> </div> Sun, 21 Apr 2013 00:07:53 +0000 mmap https://lwn.net/Articles/547999/ https://lwn.net/Articles/547999/ ricwheeler <div class="FormattedComment"> <p> The talk was about how we will be using existing APIs for the near future for pretty much every application - that is why we need to improve latency, tune existing file systems, etc.<br> <p> There will be some applications that will take advantage of new APIs, but that is pretty rare (think of how many years it has taken to get to 64-bit applications, multi-threading, etc. :)).<br> <p> <p> </div> Sat, 20 Apr 2013 22:47:09 +0000 mmap https://lwn.net/Articles/547991/ https://lwn.net/Articles/547991/ intgr <div class="FormattedComment"> I had the same question. 
While I trust that Ric Wheeler knows what he's talking about, the whole article (and perhaps the talk) seems like much hand-waving about how they need to invent all new APIs and rewrite everything from scratch, with no argumentation.<br> <p> </div> Sat, 20 Apr 2013 20:31:30 +0000 mmap https://lwn.net/Articles/547977/ https://lwn.net/Articles/547977/ mjw <div class="FormattedComment"> <font class="QuotedText">&gt; Ric mentioned that the venerable mmap() interface will need to be looked at carefully and "might not be salvageable."</font><br> <p> Does anybody have a reference with some more background on why mmap() might not be salvageable?<br> </div> Sat, 20 Apr 2013 14:56:29 +0000 LFCS: Preparing Linux for nonvolatile memory devices https://lwn.net/Articles/547969/ https://lwn.net/Articles/547969/ ricwheeler <div class="FormattedComment"> Good article. I read it as advocating for nonvolatile memory - not, as you read it, as describing a fatal flaw that puts off this class of parts for decades. <br> <p> </div> Sat, 20 Apr 2013 12:42:19 +0000 LFCS: Preparing Linux for nonvolatile memory devices https://lwn.net/Articles/547957/ https://lwn.net/Articles/547957/ dw Here is a short paper that discusses the implications of NVM for operating system design from a relatively high level: <a href="http://homes.cs.washington.edu/~luisceze/publications/novos-hotos2011.pdf">Operating System Implications of Fast, Cheap, Non-Volatile Memory</a> (PDF) <p> This article is the first I've read anywhere that mentions write lifetime as a potential issue with the myriad promised new technologies. 
I guess the dream of a unified single-level store, execute-in-place, edit-in-place, and suchlike might be deferred for a few more decades yet. Fri, 19 Apr 2013 23:21:26 +0000 LFCS: Preparing Linux for nonvolatile memory devices https://lwn.net/Articles/547949/ https://lwn.net/Articles/547949/ dowdle <div class="FormattedComment"> For those needing a video that dumbs down the subject so you can explain it to your less techie significant other... here's one from Fusion IO:<br> <p> <a href="https://www.youtube.com/watch?v=w-_Hr5f7QHw">https://www.youtube.com/watch?v=w-_Hr5f7QHw</a><br> <p> Fusion IO devices have been available for a couple of years now, I think. I know they have visited where I work giving sales pitches... and that they are used by one or more desktop-virt-in-a-box products as the uber cache that makes the IOPS problem of desktop virtualization less painful.<br> </div> Fri, 19 Apr 2013 20:36:09 +0000