Hi,
bewing wrote:Brendan wrote:Because 64-bit is "new", there's much less backward compatibility mess to worry about.
Well, I think I disagree. The next big compatibility issue is going to be EFI -- and both of us are going to have to completely recode our boot processes to handle it (if it actually becomes a standard), so I think we're in exactly the same boat.
32-bit 80x86 Apple computers use EFI already.
bewing wrote:The thing with 64bit CPUs is: if Intel locks itself into a stupid design, and someone else makes a significantly better mousetrap -- then the next generation of 64bit platforms may not even be "Intel compatible". We are at the beginning of an entirely new chipwar here, and I want to see how it falls out before I invest in a lot of code. 32bit CPUs are fast enough for me, at 2 or 3GHz, if I write fast, tight code.
It doesn't work like that - no one is allowed to make an 80x86 CPU without signing a cross-licensing deal with Intel. This means anything any manufacturer does is cross-licensed to Intel, and Intel can choose to do the same as other manufacturers (or not). Besides this there's a lot of market pressure to be compatible with everyone else, especially for application code (and even more pressure to be compatible with Windows, for better or worse).
The main problem isn't competition between manufacturers, but the addition of new features. Take paging in 32-bit CPUs as an example - the 80386 had "plain paging", the 80486 introduced INVLPG, the Pentium introduced PSE, the Pentium Pro introduced PAE and global pages, and the Pentium III introduced PAT page-level cache controls. Something new with almost every CPU type, and all of this is from Intel alone. With paging in long mode you only really need to care about whether or not NX (no-execute) is supported, and that's only a simple masking operation (nowhere near as messy as, for example, entirely different paging structures).
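To illustrate that "simple masking operation", here's a sketch in C (hypothetical helper and flag names; it assumes NX support was detected at boot via CPUID leaf 0x80000001, EDX bit 20):

```c
#include <stdint.h>
#include <stdbool.h>

#define PTE_NX (1ULL << 63)  /* no-execute bit in a long-mode page table entry */

/* Set once at boot from CPUID 0x80000001, EDX bit 20 (hypothetical flag). */
static bool nx_supported = false;

/* Build a page table entry; silently drop the NX bit if the CPU lacks it.
   Setting bit 63 on a CPU without NX would make the entry use a reserved
   bit and cause a page fault. */
static uint64_t make_pte(uint64_t phys_addr, uint64_t flags)
{
    uint64_t pte = (phys_addr & 0x000FFFFFFFFFF000ULL) | flags;
    if (!nx_supported)
        pte &= ~PTE_NX;
    return pte;
}
```

That's the whole "compatibility layer" for long-mode paging features - one conditional mask, compared to maintaining separate code paths for plain/PSE/PAE paging structures on 32-bit CPUs.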
bewing wrote:Brendan wrote:BTW, how long is it going to take to write your OS and what sort of computers will be around when it's finished?
LOL -- I have a fair amount of free time, and I think I'm halfway done. The OS is booting to a prompt, and all my system buffers are properly initialized. Or perhaps I'm 90% finished, according to that Michael Abrash quote that Combuster posts. I've been putting in a lot of hours, trying to get it running soon, rather than just dinking around with "interesting" features. I think I could have a mini-IDE running on a GUI in 2 or 3 months. I'm building the assembler for it now.
I DO admit that it's already taken me longer to get where I am than I was hoping.
![Laughing :lol:](./images/smilies/icon_lol.gif)
I thought I was well past half-way done back in 1998 - I had a CLI, a GUI, good/stable memory management, scheduler, keyboard, generic video, serial, floppy, othello, etc. In the last 7 years I've progressed a lot - I learnt how crappy my OS was, did "about half" of a much better OS and now I'm about 1% through the latest rewrite. Michael Abrash is an optimist (unless you like unmarketable code).
bewing wrote:Brendan wrote:I'm thinking that in 10 years time "many-CPU" NUMA machines will be common, and 32-bit CPUs will be obsolete (except for embedded systems).
I'm guessing more like 15 years -- the Pentium isn't quite "obsolete" yet, as a design structure. In general, I agree, of course. But if I have a decent OS on it (not M$ crap), I'd be happy to still be running my 2 GHz P4 machine in a decade, I think.
AMD's "multi-chip" machines are already NUMA, and Intel's "multi-chip" machines will be NUMA in the next few years (see Intel's CSI). The only real question is how quickly "multi-chip" machines become common, which depends on how many cores they can put on a chip before it overheats or before there isn't enough bandwidth to get data to/from the cores (and on whether you care about servers or not, as "multi-chip" is already common for servers).
Of course 64-bit is here now - I doubt anyone buying a new 80x86 desktop/server will get a 32-bit CPU. In 10 years' time how many people will be using a 10 year old computer? How many people actually use 10 year old computers now? Plenty of people own them (like me), but I doubt many people actually use them for desktop/server work - maybe one or two being used as a gateway or router running something like SmoothWall (i.e. very similar to embedded systems).
bewing wrote:Brendan wrote:What happens when you've got a 30 MB file and an application appends 2 KB to the end of it? Will you store the extra 2 KB somewhere else on disk (fragment the file), or relocate the entire file somewhere else so that you can add that extra 2 KB to the end without overwriting other data and without fragmenting the file? How much time would it cost to relocate (read and write) 30 MB of data, and how much time would it cost if the file was fragmented?
An "old" file that is being actively modified is allocated new clusters from the 8K cluster pool. When the file's "aging" flag indicates that it's not active anymore, the file system manager has a low-priority daemon that rewrites files into a "properly" sized cluster sequence (in the middle of the night ![Wink :wink:](./images/smilies/icon_wink.gif) ).
When an "old" file that is being actively modified is allocated new clusters from the 8K cluster pool, is the entire file copied into new unfragmented space from the 8k cluster pool, or is the file fragmented?
Imagine you've got 3 files on disk (A, B and C) and the disk sectors look like this:
AAAABBBBCCCC--------
If you want to append data on the end of file B, do you do this:
AAAA----CCCCBBBBB---
Or do you do this:
AAAABBBBCCCCB-------
The first option means that you've got free space fragmentation and (from an application's perspective) appending a single byte could cause unacceptable delays as the entire file is copied elsewhere. The second option means that you've got file fragmentation.
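The decision above can be sketched as a toy append policy in C (the function, threshold, and logic here are entirely hypothetical - an illustration of the tradeoff, not a description of either OS): relocating keeps the file contiguous but costs a read plus a write of the file's current size, while fragmenting is cheap now but adds a seek to every future sequential read.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical threshold: below this, copying the whole file is cheap
   enough that we'd rather relocate than fragment. */
#define RELOCATE_LIMIT (4ULL * 1024 * 1024)  /* 4 MB, arbitrary */

/* Decide how to append when the file's current extent is full.
   contiguous_free_after = free bytes immediately following the file. */
static bool should_relocate(uint64_t file_size, uint64_t contiguous_free_after)
{
    if (contiguous_free_after > 0)
        return false;               /* room to grow in place: no copy needed */
    return file_size <= RELOCATE_LIMIT; /* small: copy it; large: fragment it */
}
```

With a policy like this, appending 2 KB to a 30 MB file takes the fragment (copying 30 MB for a 2 KB append would be an unacceptable delay), while small files stay contiguous - which is exactly the tension the question is getting at: any fixed policy pays one of the two costs.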
Defragging during idle time is a good idea, but it's not the same as "fragmentation is impossible", and (for a reliable and versatile OS) it isn't necessarily adequate on its own because you can't guarantee that there will be enough idle time. For example, I use one of the computers here (mostly) as a games machine - it's either 100% busy or it's turned off (or it's booting or shutting down) - there is no idle time.
For servers you can have similar problems (but worse). For example, consider a server being used for NNTP and SMTP (newsgroups and email) where a large number of files are being modified. Are you going to expect administrators to take NNTP and SMTP offline for a few hours each day to give the OS enough idle time to defrag?
bewing wrote:Brendan wrote:How will you prevent ring 1 code from trampling on the kernel's memory area?
GDT level. Ring 1 does not use paging (only Ring 3 does). Ring 1's GDT entries only allow read access to any memory outside the currently running app's data area and the Ring 1 shared memory areas. There's only one excuse for those stinking segment registers, and that's to restrict memory accesses.
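For reference, a read-only DPL 1 data descriptor like that would be packed along these lines (a sketch following the standard 80x86 8-byte descriptor layout; the function name and constants are made up for illustration):

```c
#include <stdint.h>

/* Pack base/limit/access/flags into an 8-byte 80x86 segment descriptor.
   access = P | DPL | S | type byte; flags = G/D bits (high nibble of byte 6). */
static uint64_t make_descriptor(uint32_t base, uint32_t limit,
                                uint8_t access, uint8_t flags)
{
    uint64_t d = 0;
    d |= (uint64_t)(limit & 0xFFFF);                 /* limit bits 15:0  */
    d |= (uint64_t)(base  & 0xFFFFFF) << 16;         /* base bits 23:0   */
    d |= (uint64_t)access << 40;                     /* access byte      */
    d |= (uint64_t)((limit >> 16) & 0xF) << 48;      /* limit bits 19:16 */
    d |= (uint64_t)(flags & 0xF) << 52;              /* G/D flags        */
    d |= (uint64_t)((base >> 24) & 0xFF) << 56;      /* base bits 31:24  */
    return d;
}

/* Read-only data segment at DPL 1: P=1, DPL=01, S=1, type=0000 (data, RO). */
#define ACCESS_RO_DPL1 0xB0
```

The catch (which the next question gets at) is that segment limits are a base+limit range, so "everything except the kernel and other drivers" is hard to express with one descriptor per driver.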
Brendan wrote:If you do prevent ring 1 code from trampling on the kernel's memory area, does that mean that ring 1 code will trample on other ring 1 code instead?
Hopefully not, but at least that won't BSOD the machine. I think it's mostly preventable.
This is better than no protection, but imagine you've got a sound card driver that occasionally trashes the disk driver's shared memory, and your file systems and/or swap space are occasionally being corrupted.
Cheers,
Brendan