azblue wrote:
To me, limiting support to the physical RAM amounts you've actually tested is a bit like writing a file system that only allows file names you've actually tested. If you've stored one character, you've stored them all; if you've mapped one gigabyte, you've mapped them all.
If you have a 10TB drive and your FS is supposed to support files of any size, then I'd at least test creating a 10TB file. I probably wouldn't limit it to 10TB just because I didn't have access to a larger drive.
Of course with RAID and such you can "virtually" extend the drive and test even larger files.
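For example, on POSIX something like this quick sketch (the file name and the 4KB back-off from the end are arbitrary; you need a 64-bit off_t, and whether the file ends up sparse or fully allocated depends on the FS under test) is enough to catch offset-truncation bugs near the drive's size limit:

Code:
/* Sketch: create a 10TB test file and verify a write near the end survives. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const off_t size = 10LL * 1000 * 1000 * 1000 * 1000;   /* 10 TB */
    int fd = open("bigfile.test", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Extend the file to 10TB (sparse on most filesystems). */
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

    /* Write a marker just below the end and read it back. */
    const char marker[] = "end-of-file-test";
    char buf[sizeof marker];
    if (pwrite(fd, marker, sizeof marker, size - 4096) < 0) { perror("pwrite"); return 1; }
    if (pread(fd, buf, sizeof buf, size - 4096) < 0) { perror("pread"); return 1; }
    printf("marker %s\n", memcmp(buf, marker, sizeof marker) == 0 ? "ok" : "MISMATCH");

    close(fd);
    return 0;
}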
I would also use unit tests to "prove" that my memory manager works under all (supported) conditions. So I probably wouldn't add a 2TB RAM limit, but that's just me.
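To make that concrete, here's roughly what I have in mind (pmm_init, pmm_free_pages and pmm_alloc_page are just stand-ins for whatever your PMM's interface actually looks like): hand the allocator a synthetic memory map far bigger than any machine you own and check that the bookkeeping still adds up.

Code:
/* Host-side unit test sketch: feed the PMM a fake firmware memory map that
 * claims 16 TiB of usable RAM, no real hardware needed. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct mem_region { uint64_t base; uint64_t length; };

/* Your PMM's entry points (names are placeholders). */
void     pmm_init(const struct mem_region *map, size_t count);
uint64_t pmm_free_pages(void);
uint64_t pmm_alloc_page(void);

static const struct mem_region fake_map[] = {
    { 0x100000ULL, 16ULL * 1024 * 1024 * 1024 * 1024 },   /* 16 TiB at 1 MiB */
};

int main(void)
{
    pmm_init(fake_map, 1);

    /* Every 4 KiB frame in the region should be accounted for. */
    assert(pmm_free_pages() == fake_map[0].length / 4096);

    /* A returned frame must lie inside the region, i.e. no 32-bit
     * truncation sneaking into the address arithmetic. */
    uint64_t frame = pmm_alloc_page();
    assert(frame >= fake_map[0].base);
    assert(frame <  fake_map[0].base + fake_map[0].length);

    return 0;
}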
azblue wrote:
I just can't help but feel MS is being unreasonably paranoid. Is it the combination of a multi-billion-dollar reputation on the line and the resources to obtain multi-terabyte machines that allows them to limit their OS's physical RAM usage to only what they've tested, or is there a legitimate reason for concern?
AFAIK there's nothing bad waiting around the corner after 2TB, so in that sense MS is being "paranoid".
azblue wrote:
I suppose what I'm getting at -- and this is the reason I've posted this in Design & Theory -- is this: Should we limit our OSes only to that which we have tested on real hardware, or is it reasonably safe to assume once a PMM algorithm has been thoroughly tested it can scale as high as the CPU will support?
As said, I wouldn't (except potentially for licensing reasons, but I doubt that applies to any of us =)). However, I would use _extensive_ unit testing for all the core stuff, including the PMM and VMM: proving that it works under all supported conditions, and that boot gives an error if, for example, there's too little memory to work with.
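And for the "boot gives an error" part, something along these lines (KERNEL_MIN_BYTES and panic() are placeholders for your own threshold and early error path):

Code:
/* Early-boot sanity check sketch: refuse to continue if the firmware map
 * reports less usable memory than the kernel needs to operate. */
#include <stddef.h>
#include <stdint.h>

#define KERNEL_MIN_BYTES (16ULL * 1024 * 1024)   /* example floor: 16 MiB */

struct mem_region { uint64_t base; uint64_t length; };

void panic(const char *msg);   /* your early-boot error handler */

void check_minimum_memory(const struct mem_region *map, size_t count)
{
    uint64_t usable = 0;
    for (size_t i = 0; i < count; i++)
        usable += map[i].length;

    if (usable < KERNEL_MIN_BYTES)
        panic("Not enough physical memory to boot");
}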