Non-Volatile RAM
Hi,
This is pure curiosity...
If computers had about 512 GiB of non-volatile RAM (and no normal/volatile RAM), and you were designing an OS specifically to take advantage of this fact, what would you do differently from existing OSs?
What if the computer has 2 or more OSs installed (dual boot)?
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Non-Volatile RAM
It would be a good candidate for things that are always on, like phones and similar machines. Servers are not a good fit due to slow RAM speed and wear-out issues.
One obvious advantage is that you get flexible and quick (or instant) power cycles, and resilience to power loss. But due to technical limitations (which may be resolved in the future), I would aggressively reduce memory usage.
But why stop our imagination? How about a weird system that also has fast RAM (or a few GB of L3/L4 cache), where you put temporary computation contexts in volatile RAM/cache and have a facility to commit them onto non-volatile RAM? (This has to be handled anyway unless you also have non-volatile CPU caches.) This should be fun to explore.
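That volatile-scratch-plus-commit idea can be sketched in a few lines. This is a toy Python simulation; the class and method names are invented for illustration, with a dict standing in for both kinds of memory:

```python
import copy

class CommittedState:
    """Work happens on a volatile scratch copy; only an explicit commit()
    makes changes durable. 'durable' stands in for NVRAM, 'scratch' for
    volatile RAM/cache."""

    def __init__(self, initial):
        self.durable = copy.deepcopy(initial)
        self.scratch = copy.deepcopy(initial)

    def commit(self):
        # Make the current scratch state durable.
        self.durable = copy.deepcopy(self.scratch)

    def power_loss(self):
        # Anything not committed is lost; durable state survives.
        self.scratch = copy.deepcopy(self.durable)
```

The interesting design question is then the same one databases face: how often to commit, and whether commits must be atomic with respect to power loss.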
Re: Non-Volatile RAM
I have been thinking about this once in a while. As a simplified example, we could have a device with a framebuffer-like memory area that is non-volatile. An excellent playground! In general, it seems that modern OSs, with hibernation etc., are moving in the direction of having non-volatile RAM. Of course, the way they achieve it is kind of hacky.
Re: Non-Volatile RAM
Hi,
Some background (why I'm curious)...
A while ago HP was saying (what everyone thought was) a bunch of impractical hype about plans for a future system they're calling "The Machine". They're claiming things like unifying storage (e.g. no difference between RAM and file system) and "near instant" boot times, and massive amounts of persistent RAM (that's as fast as normal RAM), and optical interconnects, and hundreds of CPUs, and "discarding a computing model that has stood unchallenged for sixty years".
Recently Intel announced non-volatile RAM that's "1000 times" faster than NAND (and about as fast as DDR4), doesn't have problems with wear, is also affordable (not expensive to manufacture), and is ready to go into production "soon" (not some research thing we won't see for years).
Then information was leaked about Intel's upcoming Skylake that says (if you believe the leaked information is authentic) it's supporting persistent RAM and 4 times the RAM capacity of existing servers (and 100G Omni-Path interconnects).
Now I'm starting to think that in 5 years time all RAM will be non-volatile (and maybe HP's crazy hype wasn't crazy or hype).
Cheers,
Brendan
Re: Non-Volatile RAM
There's a lot of literature in the area. http://static.usenix.org/event/hotos11/ ... Bailey.pdf is a decent, short paper that discusses some of the implications.
- Kazinsal
Re: Non-Volatile RAM
I'm thinking we'll see this new product roll out in the million-dollar-setup enterprise market first; so apart from terrifyingly slow emulation (or loading the whole system image into conventional RAM and dealing with the inevitable impossible behaviour when a crash occurs and data is lost), none of us will be able to develop for it until it's too late to get on the bandwagon.
We already have OSes that have suspend to RAM -> disk. In most circumstances it works great. Unfortunately with current RAM/disk separation, over time, consumer operating environments suffer from continuous uptime (in my experience) and the restore process is incredibly slow. If we design around a single persistent main memory (like a cartridge-based console with a cart whose primary bank is made entirely of NVRAM, except several orders of magnitude larger) we can likely negate many of these problems.
More unfortunately, the quick and dirty solution is what will likely be implemented: split it into n GiB of main RAM and 512-n GiB of storage. On the plus side, since it's all in the same physical address space, you can page in more main RAM from the storage section much more efficiently (page fault handlers will be much faster!)
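The "page in from the storage section" idea can be simulated in a few lines. This is a toy Python sketch with made-up page counts standing in for the hypothetical split; a `bytearray` plays the role of the flat physical address space:

```python
PAGE = 4096
RAM_PAGES = 4          # hypothetical split: first 4 pages act as "main RAM"

# Simulated flat NVRAM: one bytearray standing in for the whole device.
# Pages [0, RAM_PAGES) are "main RAM"; the rest is the "storage" partition.
nvram = bytearray(16 * PAGE)

def page_in(ram_page: int, storage_page: int) -> None:
    """With one physical address space, 'paging in' from the storage
    partition is a plain copy: no block-device command, no interrupt,
    no waiting for DMA completion."""
    src = (RAM_PAGES + storage_page) * PAGE
    dst = ram_page * PAGE
    nvram[dst:dst + PAGE] = nvram[src:src + PAGE]
```

In a real page fault handler the copy could even be skipped entirely by just remapping the page table entry to point at the storage region, which is the part that makes the flat address space attractive.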
Re: Non-Volatile RAM
I think massive amounts of non-volatile, DRAM-speed memory will mostly serve as an optimization, rather than a paradigm shifter.
In the near term it will probably still be on a separate device, accessed through a mass-storage interface, because the hardware interfaces for RAM and disk are optimized for different things. This might go away eventually, but it could alternatively lead to a design where more memory is localized to different parts of the machine to reduce latency (i.e. more extreme NUMA).
Assuming we do end up with one giant, non-volatile, high-speed physical address space, there will still be a need to separate long-term memory from semantically-volatile memory, to handle software updates, repair corruption, or at least for versioning. One thing this might make easier is more ubiquitous history tracking, but that's also already been done without it (VM and FS checkpointing, Apple's new save-replaced-with-undo system). And for that matter, so has "one giant non-volatile high-speed physical address space," using things like mmap.
However, on the optimization side of things, it will be great. Less hardware to manufacture, so computers can be cheaper. Less power used to refresh DRAM or TRIM flash storage, so batteries will last longer. No need for things like hibernation, CPU sleep will already use no power. Much higher capacity, so no need to swap (this is already here in higher-end machines). Finer-grained addressing can make file systems more efficient.
Everything else is orthogonal, as far as I can tell. Using different permission systems than Unix-style ACLs works fine with the RAM/disk split. Single-address-space OSes still require a different type of memory protection and still work with volatile RAM. Shipping applications as checkpointed running processes has already been done with Smalltalk and VM appliances. Things like network devices can already wake systems up from sleep, non-volatile systems would just make it more efficient.
Re: Non-Volatile RAM
One problem that I can see is that moving to a complete nvram system prohibits you from resetting your system to a known state.
There is a reason why Windows still makes you reboot the machine when installing software or drivers. It's because changing the system configuration while it's running introduces virtually unlimited opportunities for unexpected behavior to a stable system.
Most applications are thoroughly tested on many different hardware configurations, but rarely are they tested on a system whose configuration is changing while the software is running. The sheer number of possibilities in this type of situation makes this sort of testing completely unfeasible.
Now, let's assume that you have an "always on" device, with non-volatile memory only, and you can start to see how the longer the system runs, the greater the chance for unexpected behavior due to conflicts between components in an ever-changing system. Either the entire OS must be re-imagined, or the current concept of installing, running, shutting down and rebooting a machine to a known, stable state must somehow be "emulated".
I'm not sure what a non-volatile OS would look like, but the ability to reset the machine to a known state at regular intervals will probably be critical to the stability of the system.
Even modern OSes become less and less stable with every configuration change. Imagine how much worse it would be if turning off the machine didn't clear system RAM...
Project: OZone
Source: GitHub
Current Task: LIB/OBJ file support
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott
Re: Non-Volatile RAM
Kazinsal wrote:More unfortunate is that the quick and dirty solution to this will likely be implemented: Split it into n GiB of main RAM and 512-n GiB of storage.
I don't think this will be done with a static n. It seems to make more sense to have some kind of tmpfs (or multiple of them) of dynamic size for the storage part.
And then it's not quick and dirty any more, but just using a file system to manage part of the memory. In theory, you could do without a file system even today with normal hard disks and just have an interface like for RAM, where applications can allocate and free ranges of sectors (= pages). It's not usually done, though, and the reason is that file systems are useful and make sense. That won't change with non-volatile RAM.
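The "allocate and free ranges of sectors (= pages)" interface mentioned above can be sketched as a toy first-fit allocator. Everything here is illustrative; a real file system adds naming, metadata, and free-range coalescing on top of exactly this kind of bookkeeping:

```python
class PageRangeAllocator:
    """Toy first-fit allocator for contiguous ranges of pages, the bare
    interface a 'file-system-less' storage layer would expose."""

    def __init__(self, total_pages: int):
        self.free = [(0, total_pages)]      # sorted list of (start, length)

    def allocate(self, n: int) -> int:
        for i, (start, length) in enumerate(self.free):
            if length >= n:                  # first fit
                if length == n:
                    del self.free[i]
                else:
                    self.free[i] = (start + n, length - n)
                return start
        raise MemoryError(f"no contiguous range of {n} pages")

    def release(self, start: int, n: int) -> None:
        self.free.append((start, n))         # no coalescing in this sketch
        self.free.sort()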
Re: Non-Volatile RAM
SpyderTL wrote:One problem that I can see is that moving to a complete nvram system prohibits you from resetting your system to a known state.
This can be easily solved by a firmware "hard-reset" feature that clears the RAM (also useful for security in multi-boot setups), or that passes a flag (denoting fresh boot or resume) into the boot loader.
Re: Non-Volatile RAM
bluemoon wrote:This can be easily solved by a firmware "hard-reset" feature that clears the RAM (also useful for security in multi-boot setups), or that passes a flag (denoting fresh boot or resume) into the boot loader.
If the user can decide to "clear" the nvram arbitrarily (which is pretty much a given, I would think), then the OS has to handle this situation gracefully. To do this, it must be sure NOT to store any critical information only in nvram, and therefore it must keep a copy of every critical data element on a different non-volatile device -- one that can't easily be erased, like a hard drive.
Once this is done, the "benefit" of using nvram instead of volatile ram is almost completely lost. The end result would be something very similar to putting your PC to sleep today. So, given these restrictions, it's probably not even worth redesigning the entire system for little to no benefit.
But I guess we'll find out.
Re: Non-Volatile RAM
The first problem is the cost. If the cost is much higher than hard disk storage, there will be no way for such memory to replace the HDD.
The second problem is the possible environment change between power off and on. Some code has to run to prevent a crash after a remote network client has closed its connection, or after the user changed his video card while the PC was off.
The next problem is the processor. If the computer is switched off, the processor loses its state, so the program it was executing is unable to continue after the power comes back on.
And finally, memory today works as a cache, but with additional functionality like decompressing data or other forms of data processing. So if additional processing is needed, there will be no benefit even if the new memory's cost is on par with HDD cost.
The only benefit I see is for data that isn't preprocessed before it is used, as in reporting applications, for example. In that case it is possible to address the data directly in the non-volatile RAM and skip the caching stage. So the benefit here equals the caching time (the time to read the data into volatile memory). If the new memory is even a bit slower than volatile RAM, the benefit of skipping the caching stage shrinks quickly as the usage grows, and beyond some threshold the benefit becomes negative.
And the conclusion is this: you need to know the exact speed, cost, power consumption and physical volume of the final solution to assess its applicability for a particular use case.
My previous account (embryo) was accidentally deleted, so I have no chance but to use something new. But may be it was a good lesson about software reliability
- BASICFreak
Re: Non-Volatile RAM
If a system with no volatile RAM were built, all OSes would need to be rebuilt (and possibly FSes too), unless you somehow emulated the x86/x86_64 memory and disk structure. The OS would likely have a pagefile, cleared on every boot, that acts as volatile memory. This file might only need to be a couple of GB, since all the read-only parts of an executable can be mapped directly to the FS in non-volatile RAM. Each boot would be about the same as on a normal system: start at an offset in NV-RAM (let's just say 0x00000000, in 32-bit protected mode or long mode), where we'd have something similar to an MBR, which would clear said pagefile and load an OS as normal. Hibernate/sleep would just restore the CPU/FPU/etc. registers on boot and run (just like a standard scheduler, plus video registers) -- also likely built into the boot code. A power outage would still cause a "fresh" boot.
This would at least be far quicker than our normal HDDs and SSDs if the speeds are the same or better than our current DDR RAM.
But if this type of system is built I would hope they would rebuild the CPU and Video Cards and everything else to be NON VOLATILE so the system can come back exactly as it was before a power outage. But the OS would still need a way to have a "volatile" source for "fresh" boots so it would be required to reset every device in the system and set memory spaces to 0.
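The boot flow described above can be sketched as a toy simulation (Python standing in for boot code; all names are invented): wipe the scratch "pagefile" on every boot, then either resume from a saved context or fall back to a fresh boot:

```python
def boot(nvram: dict, power_was_lost: bool) -> str:
    """Toy model of the boot path: 'scratch' plays the role of the
    pagefile that acts as volatile memory, 'saved_context' the CPU/FPU
    registers saved on hibernate/sleep."""
    # The MBR-like code clears the pagefile on every boot, fresh or not.
    nvram["scratch"] = bytearray(len(nvram["scratch"]))
    if power_was_lost or nvram.get("saved_context") is None:
        nvram["saved_context"] = None
        return "fresh"    # reset every device, zero state, load OS as normal
    return "resume"       # restore CPU/FPU/etc. registers and run
```

The one decision the firmware must make for the loader is exactly the flag bluemoon mentioned earlier in the thread: whether this power-on counts as a resume or a fresh boot.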
BOS Source Thanks to GitHub
BOS Expanded Commentary
Both under active development!
Sortie wrote:
- Don't play the role of an operating systems developer, be one.
- Be truly afraid of undefined [behavior].
- Your operating system should be itself, not fight what it is.
- eryjus
Re: Non-Volatile RAM
Brendan wrote:what would you do differently to existing OSs?
I would assume that such a device would be internet-present. Therefore, I see security as the biggest challenge here. You can never let anything undesirable in -- once something is in memory, it cannot be removed with a simple reboot. You could always have a program responsible for reaping anything determined to be undesirable, but "undesirable" would have to be identified by a user ("Should I remove kernel.bin? Is that important?") or by updates from a central pattern database (an opportunity for subscription fees? -- or worse, a perception problem?). But it's far better not to let anything like that in to begin with.
In short, I think your security would have to be rock-solid and impenetrable.
Adam
The name is fitting: Century Hobby OS -- At this rate, it's gonna take me that long!
Read about my mistakes and missteps with this iteration: Journal
"Sometimes things just don't make sense until you figure them out." -- Phil Stahlheber
Re: Non-Volatile RAM
Back in the day it wasn't uncommon to run the OS straight from ROM.
There are a couple of NVRAM technologies on the horizon now: memristors and 3D XPoint.
At a practical level, a lot depends (in no particular order) on:
- whether you can read/write partial lines or pages, or however it is exposed to the CPU
- whether transactions are supported at all, and if so, how big they can be and whether they can span volatile RAM and NVRAM in the same transaction
- how much power it takes to read/write
- whether it can eventually arrive on-chip rather than off-chip