
LFCS: Preparing Linux for nonvolatile memory devices

By Jonathan Corbet
April 19, 2013
Since the demise of core memory, there has been a fundamental dichotomy in data storage technology: memory is either fast and ephemeral, or slow and persistent. The situation is changing, though, and that leads to some interesting challenges for the Linux kernel. How will we adapt to the coming world where nonvolatile memory (NVM) devices are commonplace? Ric Wheeler led a session at the 2013 Linux Foundation Collaboration Summit to discuss this issue.

In a theme that was to recur over the course of the hour, Ric noted that we have been hearing about NVM for some years. NVM devices have a number of characteristics that distinguish them from other technologies. They are byte-addressable, like ordinary RAM but unlike storage devices, which have always been block-oriented. They are persistent: they do not lose state when the power goes away. They are comparable to ordinary memory in speed, and also in price, so they will not be as large as hard drives anytime soon. They are also not yet available for most of us to play with at any reasonable price.

Early solid-state devices looked a lot like disks; they used normal protocols and were not so fast that the system could not keep up with them. That situation changed, though, with the next wave of devices, which were usually connected via PCI Express (PCIe). There is a lot of code in the I/O stack that sits between the system and the storage; as storage devices get faster, the overhead of all that code is increasingly painful. Much of that code is not useful in this situation, since it was designed for high-latency devices. As a result, Linux still can't get full performance out of bus-connected solid-state devices.

As an aside, Ric had a few suggestions to offer to anybody working to tune a Linux system to work with existing fast block devices. The relevant parameters are found under /sys/block/dev/queue, where dev is the name of the relevant block device (sda, for example). The rotational parameter is the most important; it should be set to zero for solid-state devices. The CFQ I/O scheduler (selected with the scheduler attribute) is not the best for solid-state devices; the deadline scheduler is a better choice. It is also important to pay attention to the block sizes of the underlying device and align filesystems accordingly; see this paper by Martin Petersen [PDF] for details.
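As a rough illustration of that tuning, the sketch below simply writes the suggested values into the sysfs attributes for a hypothetical solid-state device named sda; normally one would echo the values from a shell or a udev rule, and the device name, like everything else here, is only an example.

    /* A minimal sketch, assuming a solid-state device named "sda"; it just
     * writes the values discussed above into the sysfs queue attributes.
     * Must be run as root. */
    #include <stdio.h>

    static int write_attr(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");

        if (!f) {
            perror(path);
            return -1;
        }
        fputs(value, f);
        fclose(f);
        return 0;
    }

    int main(void)
    {
        /* Tell the block layer this device has no rotational latency. */
        write_attr("/sys/block/sda/queue/rotational", "0");
        /* Prefer the simpler deadline I/O scheduler over CFQ. */
        write_attr("/sys/block/sda/queue/scheduler", "deadline");
        return 0;
    }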

Back to the topic at hand, Ric noted that, along with all the technical challenges, there are some organizational difficulties. Kernel developers tend to be quite specialized: at the storage layer, SCSI and SATA drives are handled by different groups. The block layer itself is maintained by a separate, very small group. There is yet another group for each filesystem, and we have a lot of filesystems. All of these groups will have to work together to make NVM devices work optimally on Linux systems.

Crawling first

Making the best use of NVM devices will require new programming models and new APIs. That kind of change takes time, but the hardware could be arriving soon. So, Ric said, we need to make them work as well as we can within the existing APIs; this is the "crawl" phase. In this phase, NVM devices will be accessed through the same old block API, much like solid-state devices are now. The key will be to make those APIs work as quickly as possible. It is a shame, he said, but we need a block driver that will turn this cool technology into something boring. There is also a need for a lot of work to squeeze overhead out of the block I/O path.

Ted Ts'o suggested that, while it is hard to get applications to move to new APIs, it is easier to make libraries like sqlite use them. That should bring improved performance to applications with no code changes at all. It was pointed out, though, that users are often reluctant to even recompile applications, so it could still take quite a while for performance improvements to be seen by end users.

The current "crawl" status is that block drivers for NVM devices are being developed now. We're also seeing caching technologies that can use NVM devices to provide faster access to traditional storage devices. The dm-cache device mapper target was merged for 3.9, and the bcache mechanism is queued for 3.10. Ric said that various vendor-specific solutions are under development as well.

Getting to the "walk" phase involves making modifications to existing filesystems. One obvious optimization is to move filesystem journals to faster devices; frequently-used metadata can also be moved. Getting the best performance will require reworking the transaction logic to get rid of a lot of the currently-existing barriers and flush operations, though. At the moment, Btrfs has a bit of "dynamic steering" capability that is a start in that direction, but there is still a lot that needs to be done.

It is also time to start thinking about the creation of byte-level I/O APIs for new applications to use, though the developers are still looking for ideas about how applications would actually like to use NVM devices. Ric mentioned that the venerable mmap() interface will need to be looked at carefully and "might not be salvageable." Application developers will need to be educated on the capabilities of NVM devices, and hardware needs to be put into their hands.
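For context, byte-granular access today goes through mmap(); the minimal sketch below shows the current model that any new API would have to improve upon. The path is hypothetical, and the sketch assumes a preexisting file at least one page long on some NVM-backed filesystem.

    /* A sketch of byte-level access through today's mmap() interface.
     * "/mnt/nvm/data" is a made-up path; the file is assumed to exist and
     * to be at least 4096 bytes long. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const size_t len = 4096;
        int fd = open("/mnt/nvm/data", O_RDWR);

        if (fd < 0) {
            perror("open");
            return 1;
        }

        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            close(fd);
            return 1;
        }

        /* Byte-granular store directly into the mapping... */
        strcpy(p, "persistent greeting");

        /* ...but persistence still requires an explicit writeback, and the
         * data still moves through the volatile page cache. */
        msync(p, len, MS_SYNC);

        munmap(p, len);
        close(fd);
        return 0;
    }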

Getting hardware into developers' hands may prove difficult, though. Over the course of the session, a number of participants complained that these devices have been "just around the corner" for the last decade, but they never actually materialize. There is a bit of a credibility problem at this point. As Tejun Heo said, nothing is concrete; there is no way to know what the performance characteristics of these devices will be or how to optimize for them. The word is that this situation will change, with developers initially getting hardware under non-disclosure agreements. But, for the moment, it's hard to know the best way to support this class of hardware.

Eventually, Ric said, we'll arrive at the "run phase," where there will be new APIs at the device level that can be used by filesystems and storage. There will be new Linux filesystems designed just for NVM devices (in a later session, we were told that Fusion-IO had such a filesystem that would be released at some unspecified time in the future). The Storage Network Industry Association has a working group dedicated to these issues. All told, the transition will take a while and will be painful, Ric said, much like the move to 64-bit systems.

Concerns

The subsequent discussion covered a number of topics, starting with a simple question: why not just use NVM devices as RAM that doesn't forget its contents when the power goes out? One problem with doing things that way is that, while NVM may perform like RAM, other aspects — such as lifespan — may be different. Excessive writes to an NVM device may reduce its useful lifetime considerably.

There was some talk about the difficulty of getting support for new types of devices into Linux in general. The development community goes way beyond the kernel; there are many layers of projects involved in the creation of a full system. This community seems mysterious to a lot of vendors. It can take many years to get features to the point that users can actually take advantage of them. An example that was raised was parallel NFS, which has been in development for at least ten years, but we're only now getting our first enterprise support — and that is client support only.

Another point of discussion was replication of data. With ordinary block devices, replicating data across multiple devices is relatively easy. With NVM devices that are directly accessed by user space, though, that "interception point" is gone, so there is no way for the kernel to transparently replicate data on its way to persistent storage. It was pointed out that, since applications are going to have to be changed to take advantage of NVM devices anyway, it makes sense to add replication features to the new APIs at the same time.

The issue of how trustworthy these devices are came up briefly. Applications are not accustomed to dealing with memory errors; that may have to change in the future. So the new APIs will need to include features for checksumming and error checking as well. Boaz Harrosh pointed out that, until we know what the failure characteristics of these new devices are, we will not be able to defend against them. Martin Petersen responded that the hardware interfaces to these devices are intended to be independent of the underlying technology. There are, it seems, several technologies competing for a place in the "post-flash" world; the interfaces, hopefully, will hide the differences between those technologies.

In summary, we seem to be headed toward an interesting new world, but it's still not clear what that world will look like or when it will arrive. Chances are that we will have good kernel support for NVM devices by the time they are generally available, but higher-level software may take a while to catch up and take full advantage of this new class of hardware. It should be an interesting transition.

[Your editor would like to thank the Linux Foundation for assistance with travel to the event.]

Index entries for this article
Kernel: Memory management/Nonvolatile memory
Kernel: Solid-state storage devices
Conference: Collaboration Summit/2013



LFCS: Preparing Linux for nonvolatile memory devices

Posted Apr 19, 2013 20:36 UTC (Fri) by dowdle (subscriber, #659) [Link]

For those needing a video that dumbs down the subject so you can explain it to your less techie significant other... here's one from Fusion IO:

https://www.youtube.com/watch?v=w-_Hr5f7QHw

Fusion IO devices have been available for a couple of years now, I think. I know they have visited where I work giving sales pitches... and that they are used by one or more desktop-virt-in-a-box products as the uber cache that makes the IOPS problem of desktop virtualization less painful.

LFCS: Preparing Linux for nonvolatile memory devices

Posted Apr 19, 2013 23:21 UTC (Fri) by dw (subscriber, #12017) [Link] (1 responses)

Here is a short paper that discusses implications of NVM on operating system design from a relatively high level: Operating System Implications of Fast, Cheap, Non-Volatile Memory (PDF)

This article is the first I've read anywhere that mentions write lifetime being a potential issue with the myriad promised new technologies. I guess the dream of unified single level store, execute-in-place, edit-in-place and suchlike might be relegated for a few more decades yet.

LFCS: Preparing Linux for nonvolatile memory devices

Posted Apr 20, 2013 12:42 UTC (Sat) by ricwheeler (subscriber, #4980) [Link]

Good article. I read it as advocating for nonvolatile memory - not, as you read it, as having a fatal flaw that puts off this class of parts for decades.

mmap

Posted Apr 20, 2013 14:56 UTC (Sat) by mjw (subscriber, #16740) [Link] (5 responses)

> Ric mentioned that the venerable mmap() interface will need to be looked at carefully and "might not be salvageable."

Does anybody have a reference with some more background on why mmap() might not be salvageable?

mmap

Posted Apr 20, 2013 20:31 UTC (Sat) by intgr (subscriber, #39733) [Link] (3 responses)

I had the same question. While I trust that Ric Wheeler knows what he's talking about, the whole article (and perhaps the talk) seems like much hand-waving about how they need to invent all new APIs and rewrite everything from scratch, with no argumentation.

mmap

Posted Apr 20, 2013 22:47 UTC (Sat) by ricwheeler (subscriber, #4980) [Link]

The talk was about how we will be using existing APIs for the near future for pretty much every application - that is why we need to improve latency, tune existing filesystems, etc.

There will be some applications that will take advantage of new APIs, but that is pretty rare (think of how many years it has taken to get to 64-bit applications, multi-threaded, etc :)).

mmap

Posted Apr 21, 2013 2:06 UTC (Sun) by plougher (guest, #21620) [Link]

> While I trust that Ric Wheeler knows what he's talking about, the whole article (and perhaps the talk) seems like much hand-waving about how they need to invent all new APIs and rewrite everything from scratch, with no argumentation.

Getting the API right first thing is important. An API is an advertisement/contract which specifies what the subsystem can do efficiently and safely. Choose the API badly and you could be saddled with poor behaviour, or with supporting difficult-to-implement features, for a long time.

An example of poor API re-use I always think about here is MTD (the Memory Technology Device subsystem covering NAND and NOR flash). MTD was introduced ~2002 as a sub-layer for the JFFS2 flash filesystem. But it also introduced user-level block device access to the underlying NAND/NOR device. This was probably mainly to allow user-level applications to write the out-of-band data, erase sectors, etc., as the block device support was semi-functional: no bad block handling, no wear leveling, etc. Knowledgeable users of MTD know never to mount a read-write block filesystem (e.g. ext4) via this block device, as it will quickly destroy your flash device... But it is there, and it constantly traps the unwary. In fact "Can I mount ext2 over an MTD device?" is a FAQ on MTD websites.

Beyond that, of course, are the instances where filesystems become trapped offering API guarantees that were never explicitly promised but were assumed. The delayed allocation changes in ext4 that caused data loss are an obvious example.

mmap

Posted Apr 22, 2013 18:36 UTC (Mon) by daniel (guest, #3181) [Link]

It is obvious to me that high performance filesystems will be the first to take advantage of these new hardware capabilities transparently. This in no way conflicts with Ric's message. Or putting it another way, why preach to the converted? It's the app people who need to get their thinking caps on, not the usual suspects.

mmap

Posted Apr 21, 2013 0:07 UTC (Sun) by plougher (guest, #21620) [Link]

> Does anybody have a reference with some more background on why mmap() might not be salvageable?

Well, an obvious observation is that mmap() is page-oriented while NVM is byte-oriented/addressable. If you're layering a filesystem on top of NVM you don't have to align the files to block boundaries or pad them, but can pack them much more closely... But without block alignment you lose the ability to mmap the file (you can copy to intermediate aligned buffers, but this is extra overhead).

CRAMFS-XIP (compressed filesystem with execute in place extensions on NOR flash) has exactly this problem. NOR is memory addressable and thus directly mmapable. However, if you want to execute in place, you can't compress or pack the file in CRAMFS.
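As a minimal illustration of that page-granularity constraint: a mapping's file offset has to be a multiple of the page size, so a byte-packed file cannot be mapped starting at an arbitrary byte (the file name below is made up).

    /* Sketch: mmap() rejects non-page-aligned file offsets with EINVAL.
     * "packed.img" is a hypothetical file that is assumed to exist. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        long page = sysconf(_SC_PAGESIZE);
        int fd = open("packed.img", O_RDONLY);

        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Try to map starting 100 bytes into the file: not page-aligned. */
        void *p = mmap(NULL, page, PROT_READ, MAP_PRIVATE, fd, 100);
        if (p == MAP_FAILED)
            printf("unaligned offset rejected: %s\n", strerror(errno));

        /* A page-aligned offset is fine (no data is accessed here). */
        p = mmap(NULL, page, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p != MAP_FAILED)
            munmap(p, page);

        close(fd);
        return 0;
    }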

LFCS: Preparing Linux for nonvolatile memory devices

Posted Apr 21, 2013 13:59 UTC (Sun) by Lennie (subscriber, #49641) [Link] (6 responses)

The price of RAM and Flash combined probably means these devices will come on the market before NVRAM:

http://www.computerworld.com/s/article/9238105/Non_volati...

LFCS: Preparing Linux for nonvolatile memory devices

Posted Apr 25, 2013 19:21 UTC (Thu) by rahvin (guest, #16953) [Link] (5 responses)

I don't recall where I saw it, but the products are already in production. It's a standard DIMM module, but it has a NAND chip mounted on the DIMM as well. I seem to recall that the NAND module was significantly larger, such that you only needed one NAND chip for every 8 RAM chips. Thus a standard DIMM with 8 chips had a single NAND chip mounted to the back (or in the middle if thickness is a concern) of the DIMM.

It's pretty neat technology, though I wonder about the overhead of copying into and out of the NAND because there is still a difference in latency. It was my understanding that there are several different nonvolatile versions of RAM coming that have similar performance to standard RAM. I believe the ferrous magnetic stuff is already in limited production.

LFCS: Preparing Linux for nonvolatile memory devices

Posted Apr 25, 2013 19:31 UTC (Thu) by Lennie (subscriber, #49641) [Link] (4 responses)

As I read the story: the copying of the content of the DIMM to Flash will only happen at shutdown, and it is read back in on power-on.

That would mean there is no overhead, and the number of writes and reads to flash over the lifetime of the module is small.

LFCS: Preparing Linux for nonvolatile memory devices

Posted Apr 25, 2013 22:35 UTC (Thu) by rahvin (guest, #16953) [Link] (3 responses)

That seems like somewhat limited functionality to me. It only seems to be useful in controlled-shutdown circumstances. It would seem to make more sense to make it a little more functional than to only use it while the power is off.

LFCS: Preparing Linux for nonvolatile memory devices

Posted Apr 25, 2013 22:48 UTC (Thu) by Lennie (subscriber, #49641) [Link] (2 responses)

Controlled ?

I assume that it would have a store of electricity (like a capacitor), which means that when power is lost, it will start copying the data in the DIMM to Flash.

The size of the Flash is a little larger than the DIMM (to have room for failed bits in Flash).

And it would have enough electricity to completely copy the content of the DIMM to Flash.

This is similar to a battery-backed RAID controller with a write cache. When you do a write, the data is kept in the RAM of the RAID controller and the application gets an ACK that it is stored. On power loss it will have enough electricity in a battery to write what is in RAM to the storage devices.

So yes, it is controlled, but it is fully handled by the product because it is self-powered and does not rely on any other component.

LFCS: Preparing Linux for nonvolatile memory devices

Posted Apr 26, 2013 4:27 UTC (Fri) by rahvin (guest, #16953) [Link] (1 responses)

I apparently missed the capacitor statement in the article. But that would mean battery backup for more than just the DIMM, as you need some logic to manage the copy process. I can't help but think that this is a server-only type of installation.

LFCS: Preparing Linux for nonvolatile memory devices

Posted Apr 26, 2013 17:38 UTC (Fri) by Lennie (subscriber, #49641) [Link]

The article had server in the title and ASIC in text. So yes, probably.

And the ASIC hopefully also does some wear leveling to make sure it can always write to Flash.

LFCS: Preparing Linux for nonvolatile memory devices

Posted Apr 22, 2013 0:41 UTC (Mon) by dowdle (subscriber, #659) [Link]

The video of the presentation was finally made public:

Collaboration Summit 2013 - Persistent Memory & Linux
https://www.youtube.com/watch?v=Ec2iu5vDjUA

LFCS: Preparing Linux for nonvolatile memory devices

Posted Apr 22, 2013 7:57 UTC (Mon) by walex (guest, #69836) [Link]

There may actually be no case for redesigning current APIs, because they provide a logical view of devices that is quite independent of the underlying technology, and that independence is well worth some overhead. And most of that overhead lies in the crossing of the kernel-user protection boundary, not in the API per se. See for example the rationale for the 'stdio' library.

But there are a number of aspects of the current Linux design where, for what I think are nakedly "commercial" reasons, some assumptions about physical device properties have been embedded in the abstraction layer implementation.

Of these the most imbecilic was the plugging/unplugging "idea", which is based on trading latency for throughput INSIDE THE PAGE CACHE, which is wholly inappropriate for a device-independent layer and for a number of physical devices too, in particular low-latency ones (and it has some bizarre side effects too).

It may well be that for some device technologies trading latency for throughput is worthwhile, but this should be done in the device driver, and should be configurable or at least it should be possible to disable it.

LFCS: Preparing Linux for nonvolatile memory devices

Posted Apr 22, 2013 11:25 UTC (Mon) by etienne (guest, #25256) [Link] (6 responses)

Now that the address space of a PC is 64 bits (well, 40 or 48 physical), the long-term goal should be having ready-to-use stuff in NVM:
- The Linux kernel should be loaded there, with its "persistent" data
- Most libraries should be loaded and pre-linked there (lazy linking), with their "persistent" data
- Maybe also have some/all servers and possibly some applications.

The problem is to define "persistent" data, i.e. data which is necessary but will be re-used; a first approximation would be data statically allocated (i.e. the data segment but nothing from malloc()) - but then where do we put the stack: "top" part in NVM and "bottom" part in standard DDR, or stack in NVM with COW (Copy to DDR on write) pages?

The other problem is upgrading either the kernel or some libraries: how to unload a library (and its dependencies) when upgrading, how to find out where a library was loaded (at a previous boot), both its physical and virtual addresses, and which version/SHA1 it was?

Maybe we should use NVM with a parallel and explicit NVMmalloc()/NVMfree() (because there is no way to magically do the right thing), instead of using a filesystem?

LFCS: Preparing Linux for nonvolatile memory devices

Posted Apr 25, 2013 19:23 UTC (Thu) by rahvin (guest, #16953) [Link] (5 responses)

Why are you assuming that the NVM will be separate from the main memory subsystem like NAND is currently? Industry is already demoing several new kinds of main system memory that are non-volatile. I believe the expectation is that you won't even have DDR memory, that your entire main memory will be NVM.

LFCS: Preparing Linux for nonvolatile memory devices

Posted Apr 26, 2013 12:14 UTC (Fri) by etienne (guest, #25256) [Link] (4 responses)

> Why are you assuming that the NVM will be separate from the main memory subsystem like NAND is currently?

Even if this NVM is on the same DDR socket, they seem to say that the number of writes is limited, and the time it takes to write NVM is longer.
The processor can really write a lot of times per second to the DDR, flushing the same cache line time and time again - we do not want that penalty, nor do we want wear levelling at that point.
Note that NVM obviously does not need refresh cycles; I wonder what effect that has on performance.

Moreover, I am not sure I always want to suspend Linux instead of powering off - sometimes I want a clean slate and to come back to the login screen, so that applications which have been leaking memory for the last 10 days restart fresh - or, after an upgrade of a library, to be sure that no application still uses the old version (removing a library file from the filesystem does not automatically restart users of the old library version, which still have the old, deleted file memory-mapped).
For the latter point, maybe a "full boot" each time is the best solution.

LFCS: Preparing Linux for nonvolatile memory devices

Posted Apr 28, 2013 18:28 UTC (Sun) by Jonno (subscriber, #49613) [Link]

> Why are you assuming that the NVM will be separate from the main memory subsystem like NAND is currently?

> Even if this NVM is on the same DDR socket, they seem to say that the number of writes is limited, and the time it takes to write NVM is longer.

There are several different types of NVM (Non-Volatile Memory), and while some have a limited number of write cycles, others don't. Performance also varies, and several types are faster than DRAM (currently used as main memory), though to my knowledge none are quite as fast as SRAM (currently used as CPU cache).

That said, it is going to be a while before you can get anything with both good performance and unlimited write cycles for anything resembling the price of regular DRAM, so while I expect some type of NVM will eventually be used in the main memory system, we are going to need some other system in order to use the current, imperfect NVM types in the meantime...

LFCS: Preparing Linux for nonvolatile memory devices

Posted Oct 11, 2016 8:08 UTC (Tue) by ecloud (guest, #56624) [Link] (2 responses)

I think the ultimate point about getting userspace onboard is that we need next-generation languages that make memory leaks impossible, that maintain data structures compactly in memory (avoid linked lists and the like), and trade in the "filesystem" APIs for the appropriate object-storage APIs. (But yes, some databases are already appropriate places to start with this.) Instead of having APIs that make filesystem access completely different from memory manipulation, we need a way of marking data structures persistent. The language should then translate that into marking pages of memory persistent, and the OS should ensure that persistent pages are stored on the appropriate device. Applications should take care not to write to persistent structures more often than necessary; but otherwise either the language implementation or the OS should provide a way to cache frequently-updated persistent structures in volatile memory, and do checkpointing of changes. (Maybe marking the structure both volatile and persistent would mean that.) I guess the next issue is that sync-written structures could be temporarily out of sync with those which are cached; then either it means all writes need to be to cached first and then flushed to NVM at the next checkpoint, or else the system needs to be power-failure-proof (not a problem for battery-powered devices; line-powered machines can have at least a capacitor-based UPS sufficient that all writes can be completed before power fails).

So, rebooting, or even restarting applications, should become exceedingly rare. It places great demands on all software to be as reliable as the kernel itself: keep running for years with no leaks, no overflows, no bugs of the kind that require restarting the software as a workaround. You couldn't truly restart the application without losing all its stored data too. Using filesystems has made it harder to write software (so much persistence-related code that has to be written), but also allowed us to be too lazy for too long about reliability of the in-memory operations. If we invest as much effort into keeping memory beautifully organized as we have invested into file-based persistence, maybe we can get there?

I doubt that Linux will be the leader here, but there must be some current university research project by now? Anybody know of one? A long time ago there was KeyKOS, which had checkpointing-based persistence; then there was Eros, but its focus shifted more strongly to capability-based security than to checkpointing. (And Linux still doesn't have such advanced capability-based security, either. This is why Sandstorm exists: the OS doesn't do it, so you have to rely on containers and management of them to isolate processes from each other.)

So now we have NVMe devices, like the M.2 flash drives. Can they be configured as memory-mapped, without using mmap()? Because using mmap() implies that all reads and writes will be cached in volatile RAM, right? If the hardware allows us to have RAM for one range of addresses and flash for another range, this work could begin.

LFCS: Preparing Linux for nonvolatile memory devices

Posted Oct 13, 2016 12:46 UTC (Thu) by nix (subscriber, #2304) [Link] (1 responses)

Fundamentally it seems to me that we'll still want something like a filesystem: a collection of named, possibly hierarchically or otherwise structured blocks of data that can be used by multiple programs without regard for which program created them. Just arranging for programs to keep their data structures around forever doesn't do that: each program would have to implement some sort of organizational principle and if they're not all using the same one this is tantamount to every program having its own implementation of half a filesystem, non-interoperable with any of the others or with external tooling. This seems drastically worse than what we have now.

Persistent memory is nice because it might mean that e.g. you could shut down a machine with a long-running computation on it and have it just restart again. Of course, with the CPU caches not persistent, it might have to go back a few seconds to a checkpoint. You can often do that *now* by just checkpointing to disk every few seconds, but with persistent storage you can presumably do that even if there are gigabytes of state (assuming that the persistent memory of discourse doesn't wear out on writes the way flash does).

But persistent memory will not allow us to do away with filesystems: neither the API nor the allocation layer. The fundamental role of the filesystem API -- naming things and letting users, and disparate programs, access them -- will still be needed, and cannot be replaced by object storage any more than you can replace a filesystem with an inode table and tell users to just access everything by inode number. Equally, the role of filesystems themselves -- the object allocation layer -- is still there: It's just a filesystem for persistent storage, with differently strange tradeoffs than every other filesystem's differently strange tradeoffs. Even having files with no name is not new: unlinked files have given us that for decades, and more recently open(..., O_TMPFILE) has too.

LFCS: Preparing Linux for nonvolatile memory devices

Posted Oct 13, 2016 14:32 UTC (Thu) by raven667 (subscriber, #5198) [Link]

> every program having its own implementation of half a filesystem, non-interopable with any of the others

Probably not every program, but every major language family that doesn't share low-level compatibility of its data structures, like how today having a C API is the lowest common denominator for a language's compatibility with other languages. Or how JSON has become a medium of exchange for network software.

> naming things and letting users, and disparate programs, access them

With the popularity of application sandboxing, with Flatpak on the desktop and Docker on the server, there are far more defined and regimented ways for applications to share data, so I don't expect arbitrary disparate programs accessing data to be supported in this model.

Fairness versus performance

Posted Apr 22, 2013 15:53 UTC (Mon) by arjan (subscriber, #36785) [Link]

If you have a very high speed IO device... quite often you care more about fairness (and maybe even bandwidth allocation) between tasks/cgroups/whatever than pure raw performance.

CFQ at least tries to do something there... deadline and co not so much.

Not saying CFQ is the be-all and end-all of IO schedulers, but only looking at throughput or latency is clearly not the whole story.

storage and emulation

Posted Apr 22, 2013 19:06 UTC (Mon) by ndye (guest, #9947) [Link]

. . . to work together to make NVM devices work optimally on Linux systems

A couple spots where I don't catch on:

  • Is this planning only for storage devices (files with a path), rather than the malloc'd working storage of either OS or application?
  • How might we emulate this in QEMU, VirtualBox, etc. for tracing and benchmarking?


Copyright © 2013, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds