when i think nvdimm 3dxpoint it is mind boggling
Intel was promoting this hard during IDF16.
NVDIMM = DIMM modules that can hold data while PC/server is powered off.
The possibilities seem mind-boggling from either an OS or an application point of view:
First casualty: the INT 13h service is gone, since no HDD is needed. All the disk storage protocols become obsolete as a result.
Either every operating system and application written since the dawn of computing has to be rewritten to take advantage of NVDIMM, since they are all built around the same model: load from disk, save to disk.
Maybe that is too much work; perhaps there will instead be a new class of applications and OSes (Windows 1x, where x > 0) that don't load from disk but simply stay in memory.
Buy a Windows 1x (x > 0) DVD and install it straight into memory, haha.
Sounds like science fiction, but it is really happening.
Key takeaway after spending years in the software industry: a big issue becomes small because everyone jumps on it and fixes it; a small issue becomes big because everyone ignores it and it causes a catastrophe later. #devilisinthedetails
Re: when i think nvdimm 3dxpoint it is mind boggling
Hi,
ggodw000 wrote: First casualty: the INT 13h service is gone, since no HDD is needed. All the disk storage protocols become obsolete as a result.
For BIOS and UEFI, the existing functions to access disks and/or file systems will remain exactly the same. E.g. you'd still use "int 0x13 extensions" to load data from "device 0x80"; it's just that "device 0x80" would be non-volatile RAM instead of a hard disk. That way, all existing OSs would still work (and would only need a very simple "storage device driver").
About a year ago I started wondering what an OS would do differently if all memory was non-volatile. The result was surprisingly anticlimactic.
For example, in case you want/need to replace the OS or want to have multiple OSs installed, you'd want a standardised partitioning scheme for memory (so one OS doesn't trash all your data because it doesn't know what another OS used the memory for); and in that case it ends up being mostly the same as a traditional/existing OS, with a few minor differences ("/dev/nva1" might be used for the OS's root file system instead of "/dev/sda1", the file system might be designed specifically to align to page boundaries, memory-mapped files would become considerably faster, etc.).
Note: if this wasn't the case (i.e. if existing OSs didn't work), it'd create a relatively severe marketing challenge for Intel - "Give us money for something that is going to break all the software you already own" isn't a great advertising campaign.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
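A rough sketch of the memory-mapped-file point above, assuming a Linux system where the NVDIMM is exposed through a DAX-capable filesystem; the mount point /mnt/pmem and the file name are purely illustrative:

```c
/* Minimal sketch, not a definitive implementation: map a file that lives
 * on a hypothetical DAX-mounted NVDIMM filesystem and update it with
 * ordinary stores. Assumes Linux with mmap/msync available. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/mnt/pmem/example.dat";   /* assumed mount point */
    size_t len = 4096;

    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, (off_t)len) != 0) { perror("ftruncate"); return 1; }

    /* With DAX the mapping goes straight to the persistent media,
     * so this behaves like (persistent) memory rather than buffered I/O. */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "hello, persistent world");

    /* msync is still the portable way to request durability. */
    if (msync(p, len, MS_SYNC) != 0) perror("msync");

    munmap(p, len);
    close(fd);
    return 0;
}
```

Nothing about the program changes compared with mapping a file on a disk-backed filesystem, which is exactly why the result is "anticlimactic"; the mapping just happens to sit on persistent, byte-addressable media.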
- Schol-R-LEA
Re: when i think nvdimm 3dxpoint it is mind boggling
It is also important to keep in mind that this is not entirely a new topic in OS design; several other memory technologies used in the past, most notably core, bubble memory, and SRAM, were also non-volatile. The main reasons that dynamic-refresh memory became ubiquitous for primary storage after 1968 were operating speed and production cost; DRAM is cheap, low-power, and most of all very, very fast, and has gotten faster every year.
The main thing that NVDIMM does is make high-speed solid-state memory non-volatile, which is itself a feat. It does make moving towards persistent-state operating systems more practical.
You will note, first, that almost no persistent OS so far (e.g., CapROS) has been built exclusively on non-volatile memory; you still need to be able to boot from cold in cases where, for example, memory inconsistencies arise.
Second, note that in none of those cases was there any consideration of eliminating secondary storage entirely. Even NVDIMM is not really stable enough for long-term storage (nor is flash-based bulk storage, for that matter, which is one of the reasons SSDs haven't driven disk drives out of the market yet despite the drop in price), so archival storage will still require hard disks, optical disks, and maybe even tape backup for staged archiving. While it is likely that laptops and even many desktop PCs (which are becoming a niche item anyway, as most people use tablets for everything and put their data on *snicker* 'cloud' servers) will drop the use of hard disks, they will probably keep flash drives for cold boots, while server environments will simply treat NVDIMM as another tier in their staged archival systems.
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.
Re: when i think nvdimm 3dxpoint it is mind boggling
Schol-R-LEA wrote: core, bubble memory, and SRAM, were also non-volatile
Two out of three's not bad, I guess. SRAM may not require refresh cycles like DRAM, but it's just as volatile once the power goes off.
Schol-R-LEA wrote: DRAM is cheap, low-power, and most of all very, very fast, and has gotten faster every year.
Actually, most of all, it's cheap -- SRAM tends to be faster, but also more expensive because of its higher transistor count per bit, and much more power-hungry for the same reason, limiting it (these days) to applications such as caches, where the advantages of a small amount of very fast memory outweigh its other costs.
Edit: minor text fixes.
Those who understand Unix are doomed to copy it, poorly.
Re: when i think nvdimm 3dxpoint it is mind boggling
Good points. I know it is kind of hard to unplug from this load-and-save mentality because it has been there since the dawn of computing. Granted, I have no idea how the state of an OS in memory looks compared with how it looks on a disk drive in its installed state; learning that would, I think, give more insight.
It is definitely possible to use an NVDIMM just like a regular DIMM; at the very least, the S3 and S4 sleep states might become unnecessary.
Perhaps storing everything in RAM requires some kind of new paradigm that is entirely separate from current implementations.
The stability and lifespan of NVDIMM are also a concern, but hey, disk drives fail too, right? So I think that can be solved with RAID or similar HA techniques.
Re: when i think nvdimm 3dxpoint it is mind boggling
Schol-R-LEA wrote: You will note, first, that almost no persistent OS so far (e.g., CapROS) has been built exclusively on non-volatile memory; you still need to be able to boot from cold in cases where, for example, memory inconsistencies arise.
Yes, memory inconsistencies are an issue. Hmm, now it looks more complicated.
- Schol-R-LEA
Re: when i think nvdimm 3dxpoint it is mind boggling
Schol-R-LEA wrote: core, bubble memory, and SRAM, were also non-volatile
Minoto wrote: Two out of three's not bad, I guess. SRAM may not require refresh cycles like DRAM, but it's just as volatile once the power goes off.
OK, that was just careless on my part. Thank you for the correction.
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.
Re: when i think nvdimm 3dxpoint it is mind boggling
ggodw000 wrote: Good points. I know it is kind of hard to unplug from this load-and-save mentality because it has been there since the dawn of computing.
But here's just the thing: NVRAM (in the form of magnetic core memory, or, earlier on, magnetic drums) *was* the primary form of RAM *during* most of the dawn of computing, yet systems were still structured according to a "load and save mentality".
NVRAM is not some monumental new thing; it's old school. It had some advantages, but it was moved away from because volatile RAM was cheaper to produce. New technologies may make it practical for computers to use NVRAM for working memory again, but it won't cause a revolutionary paradigm shift. The working memory / long-term storage split will still be there, just like it was the last time computers were built with non-volatile working memory.
Re: when i think nvdimm 3dxpoint it is mind boggling
IMO, for desktops particularly, this may eliminate the disk/page cache eventually and scrap a few hundred pages from the OS design textbooks. For server SANs, with possibly tiered storage, you end up having cache on top of cache on top of cache. What I mean is, main memory is just another tier there, and while non-volatility will save you from having to write your redo journal, it will not save you from having to manage space and intelligently fetch into it and evict from it.
Some other random thoughts: non-volatile caches, battery-backed caches, etc. have been in some devices for quite some time. Probably the same techniques could have been applied to main memory a long time ago, but no OS is engineered around such an architecture.
In contrast, with cloud virtualization, a VM may be dedicated to a single application instance. Having non-volatile memory in a server where each machine has such a short lifespan may be of limited benefit.
It's almost frustrating how the hardware devs change the direction of software development without much ado.
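To illustrate the "manage space and evict from it" point: even if main memory is non-volatile, a fast tier in front of a slower one still needs an eviction policy. A minimal sketch follows; the slow_read() backing-tier function is an assumption for illustration, not a real API:

```c
/* Tiny fixed-size block cache with least-recently-used eviction.
 * Non-volatility makes the tier persistent, but the policy remains. */
#include <stdint.h>

#define NSLOTS 64
#define BLKSZ  4096

struct slot {
    uint64_t blockno;    /* which backing block is cached here   */
    uint64_t last_use;   /* logical clock value for LRU ordering */
    int      valid;
    unsigned char data[BLKSZ];
};

static struct slot cache[NSLOTS];
static uint64_t ticks;

/* Assumed to exist elsewhere: fetches one block from the slower tier. */
extern void slow_read(uint64_t blockno, void *buf);

/* Return cached data for blockno, evicting the least recently used
 * (or an empty) slot on a miss. */
unsigned char *cache_get(uint64_t blockno)
{
    struct slot *victim = &cache[0];
    for (int i = 0; i < NSLOTS; i++) {
        if (cache[i].valid && cache[i].blockno == blockno) {
            cache[i].last_use = ++ticks;      /* hit: refresh recency */
            return cache[i].data;
        }
        if (!cache[i].valid || cache[i].last_use < victim->last_use)
            victim = &cache[i];               /* track eviction candidate */
    }
    slow_read(blockno, victim->data);         /* miss: fetch from slower tier */
    victim->blockno  = blockno;
    victim->valid    = 1;
    victim->last_use = ++ticks;
    return victim->data;
}
```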
Re: when i think nvdimm 3dxpoint it is mind boggling
simeonz wrote: IMO, for desktops particularly, this may eliminate the disk/page cache eventually and scrap a few hundred pages from the OS design textbooks.
Ehhh, not likely. "Page/disk caches" are meant to handle common/recurring or latest requests. They can still be useful with NVRAM, because NVRAM can potentially be corrupted by another OS after any shutdown/logout/whatever you as an OSDev call it.
simeonz wrote: For server SANs, with possibly tiered storage, you end up having cache on top of cache on top of cache. What I mean is, main memory is just another tier there, and while non-volatility will save you from having to write your redo journal, it will not save you from having to manage space and intelligently fetch into it and evict from it.
Servers have a much better use for NVRAM (which is never comparable to SANs, ever!). The only problem arises when AppDevs write cache systems themselves and/or SysAdmins know what they are doing. This essentially leads to AppDevs writing code for the lowest common denominator (LCD) and just accepting it, and therefore SysAdmins have to deal with it because of that LCD. So basically, it won't happen without serious effort from any NVRAM manufacturer's marketing department.
simeonz wrote: In contrast, with cloud virtualization, a VM may be dedicated to a single application instance. Having non-volatile memory in a server where each machine has such a short lifespan may be of limited benefit.
Only if the hypervisor (HV) is specifically designed to solve that "problem". Otherwise, it's only extra space the HV can use.
- Monk
Re: when i think nvdimm 3dxpoint it is mind boggling
Monk wrote: Ehhh, not likely. "Page/disk caches" are meant to handle common/recurring or latest requests. They can still be useful with NVRAM, because NVRAM can potentially be corrupted by another OS after any shutdown/logout/whatever you as an OSDev call it.
I was referring to the fact that the page cache is essentially a map of files into RAM pages - PTEs for mapped files and explicit kernel tries. If you take ext3, inodes contain a very similar trie-like structure that maps file offsets to allocated clusters. Now, a hypothetical netbook that uses, say, 64 GB of NVRAM could stuff the entire local filesystem in memory, and the fs would satisfy the application's request just as fast as the cache would, by reading directly from its own file allocation map. Caching the I/O would only duplicate the data. If the fs uses compression or encryption, or is a network fs, that is a different story. It may have been a hasty statement, but this is what I meant.
You mentioned corruption. Current storage drives use quite a lot of redundancy to correct and detect bit rot. In fact, both SSDs and HDDs use so much ECC that in practice the only undetected errors are the ones that occur on the data path to the media, not those due to aging of the media itself. If any NVRAM technology is to be used as a replacement for persistent storage, it should provide similar robustness. (Errors are not a problem. But undetected errors are.)
Monk wrote: The only problem arises when AppDevs write cache systems themselves and/or SysAdmins know what they are doing. So basically, it won't happen without serious effort from any NVRAM manufacturer's marketing department.
I was speculating that NVRAM could become a ubiquitous replacement for RAM for at least some fraction of the market (due to decreasing manufacturing cost, increased mass production, etc.). In such a case, some software would be written specifically for it.
Re: when i think nvdimm 3dxpoint it is mind boggling
As has been mentioned, probably the filesystem would be unaffected since you still need a way to arrange the data somehow. And programs will still need to allocate and free memory as needed.
I assume the biggest change may come from allocating large chunks of memory. Doing it in RAM is a problem, since the chunk needs to be a contiguous range in the virtual space of the process, which makes it awful when it needs to be resized for whatever reason (and it ends up running into another already-allocated range). Files already have this problem abstracted out (by forcing an indirection), but storage media are slow, so you may want to avoid that indirection when performance is a concern. But if file accesses are likely to be about as fast as RAM accesses, it's likely more programs would be adapted to just use files for their allocations instead (removing the management overhead from the application itself).
Certainly not that drastic of a change but it may still be important.
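A minimal sketch of that "just use files for allocations" idea, assuming POSIX mmap/ftruncate plus the Linux-specific mremap; the names and the growth strategy are illustrative, not a prescribed design:

```c
/* Sketch, not a definitive implementation: a growable buffer backed by a
 * file. The file abstracts away physical placement; growing it is just
 * ftruncate + remapping, instead of hoping a realloc finds room. */
#define _GNU_SOURCE            /* for mremap (Linux-specific) */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

struct filebuf {
    int    fd;
    size_t size;
    void  *mem;
};

int filebuf_open(struct filebuf *fb, const char *path, size_t initial)
{
    fb->fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fb->fd < 0) return -1;
    if (ftruncate(fb->fd, (off_t)initial) != 0) return -1;
    fb->mem = mmap(NULL, initial, PROT_READ | PROT_WRITE, MAP_SHARED, fb->fd, 0);
    if (fb->mem == MAP_FAILED) return -1;
    fb->size = initial;
    return 0;
}

int filebuf_grow(struct filebuf *fb, size_t newsize)
{
    if (ftruncate(fb->fd, (off_t)newsize) != 0) return -1;
    /* MREMAP_MAYMOVE lets the kernel relocate the mapping if the old
     * virtual range cannot be extended in place. */
    void *p = mremap(fb->mem, fb->size, newsize, MREMAP_MAYMOVE);
    if (p == MAP_FAILED) return -1;
    fb->mem  = p;
    fb->size = newsize;
    return 0;
}
```

On a storage medium that is about as fast as RAM, the extra indirection through the file costs little, which is the trade-off described above.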
Re: when i think nvdimm 3dxpoint it is mind boggling
There are some specific scenarios where things will change drastically, e.g. most databases today are built around WAL and other attempts at mitigating the cost of flushing. Think how fast and consistent databases could become when running from unbuffered NVRAM, if they rearchitect themselves.
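To make the WAL point concrete, here is a hedged sketch contrasting a conventional "append to a log and fsync" commit with a direct update to a mapped region of (hypothetically DAX-exposed) NVRAM. It is not how any particular database works, and crash-consistent ordering of the stores is deliberately left out:

```c
/* Sketch under stated assumptions, not a real database engine. */
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Conventional durability: append a redo record to the write-ahead log
 * and fsync before acknowledging the transaction. */
int wal_commit(int log_fd, const void *rec, size_t len)
{
    if (write(log_fd, rec, len) != (ssize_t)len) return -1;
    return fsync(log_fd);          /* the expensive part on disks/SSDs */
}

/* With the data living directly in byte-addressable NVRAM (a MAP_SHARED
 * mapping of a file on an assumed DAX mount), the in-place update itself
 * can be the durable action; msync is the portable "make it persistent"
 * request. dst is assumed page-aligned, as msync requires. */
int nvram_commit(void *dst, const void *rec, size_t len)
{
    memcpy(dst, rec, len);
    return msync(dst, len, MS_SYNC);
}
```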
Re: when i think nvdimm 3dxpoint it is mind boggling
You can resize/move files on disk because they are managed (a file does not reference a position on the disk; rather, files refer to other files by their relative/absolute paths).
In managed software, you can defragment virtual memory, because you control every pointer.
RAM, disk, or NVRAM will have the same problem if the resources you put there are not managed.
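One classic way to get that "you control every pointer" property without a fully managed runtime is double indirection through handles, so the allocator is free to relocate (compact) blocks between accesses. A minimal sketch with made-up names:

```c
/* Sketch of handle-based allocation: callers hold a handle, not a raw
 * pointer, so the allocator may move blocks (e.g. to defragment) as long
 * as callers re-fetch the pointer after any call that may compact. */
#include <stdlib.h>
#include <string.h>

#define MAX_HANDLES 256

static void  *table[MAX_HANDLES];   /* handle -> current block address */
static size_t sizes[MAX_HANDLES];

typedef int handle_t;                /* index into the table */

handle_t h_alloc(size_t size)
{
    for (handle_t h = 0; h < MAX_HANDLES; h++) {
        if (table[h] == NULL) {
            table[h] = malloc(size);
            sizes[h] = size;
            return table[h] ? h : -1;
        }
    }
    return -1;                       /* out of handles */
}

void *h_deref(handle_t h) { return table[h]; }

/* Stand-in for compaction: relocate every block to a fresh allocation.
 * A real compactor would move blocks into one contiguous arena. */
void h_compact(void)
{
    for (handle_t h = 0; h < MAX_HANDLES; h++) {
        if (table[h]) {
            void *nw = malloc(sizes[h]);
            if (!nw) continue;
            memcpy(nw, table[h], sizes[h]);
            free(table[h]);
            table[h] = nw;
        }
    }
}
```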
Re: when i think nvdimm 3dxpoint it is mind boggling
If NVRAM becomes commonplace in computer systems, I don't imagine it being used as a long-term storage device. Personally, I would use it as an alternative to suspend (which requires a little bit of power to keep the memory contents and fails if the power goes off) and hibernate (which requires disk space and takes extra time when loading/saving the memory contents), and for recovering data if the power goes off. That last one might be a little difficult, as one couldn't just carry on as if nothing had happened, because the CPU state is lost. If CPUs and other internal components could dump their state to onboard NVRAM in the case of power failure, I think that would have a lot of potential, but a lot of extra protocols would be necessary to tell the devices to restore their state - remember that we're talking about every single device in the computer: CPU, memory controller, PCI controller, every installed PCI card, ISA controller, PS/2 controller, DMA controller, the hard disk's onboard controller, and so on.
NVRAM would also provide an interesting "enhancement" to pre-fetching: instead of pre-fetching important libraries during boot, they could simply be kept in memory all the time and be available as soon as the system starts booting (as long as no other system has overwritten them in the meantime - there would need to be some way to reliably determine whether they're still intact; a checksum should suffice, although it takes a few extra milliseconds), and they would only be removed from memory if absolutely necessary (i.e. in a low-memory situation).
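The "a checksum should suffice" idea might look like the following sketch; the region layout, field names, and choice of CRC-32 are assumptions for illustration, not any existing OS's mechanism:

```c
/* Sketch: verify that a library image kept in non-volatile RAM across a
 * boot is still intact before reusing it. Layout and names are hypothetical. */
#include <stddef.h>
#include <stdint.h>

struct retained_region {
    uint32_t length;     /* bytes of payload that follow               */
    uint32_t checksum;   /* CRC-32 of the payload, written at shutdown */
    uint8_t  payload[];  /* the preloaded library image                */
};

/* Plain (slow but simple) CRC-32, polynomial 0xEDB88320. */
static uint32_t crc32(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

/* Returns 1 if the retained image can be reused, 0 if it must be
 * re-fetched from storage (e.g. another OS overwrote the memory). */
int retained_region_valid(const struct retained_region *r)
{
    return crc32(r->payload, r->length) == r->checksum;
}
```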
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.
Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing