OSDev's dream CPU

Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance

Re: OSDev's dream CPU

Post by Combuster »

Yoda wrote:
Combuster wrote:Many of the individual steps you want to take can perfectly well be encoded in two bytes
But many won't. The finer the granularity, the more efficient an encoding you can implement. My opinion is that 1-byte granularity is the perfect balance between decoding speed and memory usage.
You're entitled to your opinion, but ARM's THUMB disagrees with that sentiment. (also, have you ever tried the m68k architecture?)
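A back-of-the-envelope sketch of the trade-off being argued here (the instruction mix, the per-instruction sizes, and all the numbers below are made up purely for illustration, not measurements of any real ISA): the coarser the granule, the more padding short operations pay.

/* Toy model of code size under different instruction-encoding granularities.
 * The instruction mix and the "ideal" byte counts are hypothetical and only
 * illustrate the padding cost of rounding every instruction up to a granule. */
#include <stdio.h>

struct op {
    const char *name;
    int ideal_bytes;   /* size the instruction "wants" with no alignment */
    int count;         /* how often it appears in the imaginary program */
};

static const struct op mix[] = {
    { "reg-reg ALU",  2, 400 },
    { "short branch", 2, 150 },
    { "load/store",   3, 300 },
    { "move imm32",   5, 100 },
    { "long call",    5,  50 },
};

/* Total code size when every instruction is rounded up to a multiple of
 * the granule size. */
static long total_size(int granule)
{
    long total = 0;
    for (size_t i = 0; i < sizeof mix / sizeof mix[0]; i++) {
        int len = (mix[i].ideal_bytes + granule - 1) / granule * granule;
        total += (long)len * mix[i].count;
    }
    return total;
}

int main(void)
{
    const int granules[] = { 1, 2, 4 };
    for (size_t i = 0; i < sizeof granules / sizeof granules[0]; i++)
        printf("%d-byte granule: %ld bytes of code\n",
               granules[i], total_size(granules[i]));
    return 0;
}

With invented numbers like these, the 1-byte granule only wins to the extent that common operations genuinely want odd sizes; if most of them fit in two bytes anyway, a 2-byte granule loses almost nothing.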
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
Yoda
Member
Posts: 255
Joined: Tue Mar 09, 2010 8:57 am
Location: Moscow, Russia

Re: OSDev's dream CPU

Post by Yoda »

Combuster wrote:You're entitled to your opinion, but ARM's THUMB disagrees with that sentiment.
ARM introduced THUMB because they realized the drawbacks of 32-bit granularity. Who knows, maybe one day they'll release a TINY architecture – byte-oriented :D. Truly speaking, commercial success is not always an indicator of ISA perfection. You know, Intel/AMD dies have quite optimized internals despite the ugly ISA. The ARM architecture has existed since the mid-eighties, i.e. it has more than 25 years of history. Methinks ARM is going the same way as Intel: they try to support and develop rather outdated RISC ideas with progressive internal technologies.
Combuster wrote:(also, have you ever tried the m68k architecture?)
Yes, it is also good, although not perfect.
Better to remember the PDP-11, which was truly 16-bit granular, and the transition to its successor, the VAX-11. DEC abandoned 16-bit granularity in favor of a byte-oriented architecture.
Yet Other Developer of Architecture.
OS Boot Tools.
Russian national OSDev forum.
Owen
Member
Posts: 1700
Joined: Fri Jun 13, 2008 3:21 pm
Location: Cambridge, United Kingdom

Re: OSDev's dream CPU

Post by Owen »

Yoda wrote:
Combuster wrote:You're entitled to your opinion, but ARM's THUMB disagrees with that sentiment.
ARM introduced THUMB because they realized the drawbacks of 32-bit granularity. Who knows, maybe one day they'll release a TINY architecture – byte-oriented :D. Truly speaking, commercial success is not always an indicator of ISA perfection. You know, Intel/AMD dies have quite optimized internals despite the ugly ISA. The ARM architecture has existed since the mid-eighties, i.e. it has more than 25 years of history. Methinks ARM is going the same way as Intel: they try to support and develop rather outdated RISC ideas with progressive internal technologies.
Because inventing three whole instruction sets in those 24 years (ARM, now A32: 1985, introduced with ARMv1; Thumb: 1994, introduced with ARMv4T, vastly extended to Thumb-2 in 2005 [which equals ARM mode in performance, and beats it when memory bandwidth is limited], renamed T32 by ARMv8; A64: introduced with ARMv8 in 2012, a whole new ISA for the 64-bit architecture) is clearly evidence of just making the old go faster, and not evidence of forward-thinking design and redesign...

You may also note that A64 returns to 32-bit instruction granularity, redefines the register file to contain 31 general-purpose registers, and completely replaces how the operating modes work...

Based on tradition, we can expect something along the lines of ARMv9 deprecating A32/T32 and probably removing support for the system instructions, ARMv10 making them optional, and ARMv11 removing them completely. ARM do not shy away from vast redefinitions of their architecture, as first seen with the removal of 26-bit mode and the continual evolution of the system-mode architecture (it is not expected that an ARMvN OS will run unmodified on an ARMvM processor).

And, besides: by the time ARMv11 comes around, one can expect emulation performance to be such that emulating 32-bit code will be performance-competitive with the ARMv7 cores it was designed to run on.
JamesM
Member
Posts: 2935
Joined: Tue Jul 10, 2007 5:27 am
Location: York, United Kingdom

Re: OSDev's dream CPU

Post by JamesM »

Owen wrote:
Yoda wrote:
Combuster wrote:You're entitled to your opinion, but ARM's THUMB disagrees with that sentiment.
ARM introduced THUMB because they realized the drawbacks of 32-bit granularity. Who knows, maybe one day they'll release a TINY architecture – byte-oriented :D. Truly speaking, commercial success is not always an indicator of ISA perfection. You know, Intel/AMD dies have quite optimized internals despite the ugly ISA. The ARM architecture has existed since the mid-eighties, i.e. it has more than 25 years of history. Methinks ARM is going the same way as Intel: they try to support and develop rather outdated RISC ideas with progressive internal technologies.
Because inventing three whole instruction sets in those 24 years (ARM, now A32: 1985, introduced with ARMv1; Thumb: 1994, introduced with ARMv4T, vastly extended to Thumb-2 in 2005 [which equals ARM mode in performance, and beats it when memory bandwidth is limited], renamed T32 by ARMv8; A64: introduced with ARMv8 in 2012, a whole new ISA for the 64-bit architecture) is clearly evidence of just making the old go faster, and not evidence of forward-thinking design and redesign...

You may also note that A64 returns to 32-bit instruction granularity, redefines the register file to contain 31 general-purpose registers, and completely replaces how the operating modes work...

Based on tradition, we can expect something along the lines of ARMv9 deprecating A32/T32 and probably removing support for the system instructions, ARMv10 making them optional, and ARMv11 removing them completely. ARM do not shy away from vast redefinitions of their architecture, as first seen with the removal of 26-bit mode and the continual evolution of the system-mode architecture (it is not expected that an ARMvN OS will run unmodified on an ARMvM processor).

And, besides: by the time ARMv11 comes around, one can expect emulation performance to be such that emulating 32-bit code will be performance-competitive with the ARMv7 cores it was designed to run on.
Sidenote 1: Thumb vs. ARM performance very much depends upon the benchmark in question.
Sidenote 2: ARM has been able to rewrite its architecture so easily because its end users generally don't have an upgrade path - they are pinned to a device with no way to change the hardware, and normally on a stable platform. That is changing now with the heterogeneous landscape of tablets, phones and servers coming out. I'd expect AArch64 to stay around in its current guise for quite some time.
drunkenfox
Member
Posts: 46
Joined: Tue Mar 13, 2012 10:46 pm

Re: OSDev's dream CPU

Post by drunkenfox »

Mine would be:

Architecture - MIPS-based "lazy a$$" assembly
Bits - 32 or 64
Cores - 2, 3, or 4
Speed - 2.1+ GHz
;goodbye OS, hello BIOS
mov eax, 0FFFFFFF0h
jmp eax
linguofreak
Member
Posts: 510
Joined: Wed Mar 09, 2011 3:55 am

Re: OSDev's dream CPU

Post by linguofreak »

ponyboy wrote:Mine would be:

Architecture - MIPS-based "lazy a$$" assembly
Bits - 32 or 64
Cores - 2, 3, or 4
Speed - 2.1+ GHz
I wouldn't really classify clock speed or number of cores as "dream CPU" material for me as a developer. As a *user*, sure, but a developer doesn't care whether his code runs on a 2 GHz or a 3 GHz implementation of an architecture (indeed, it will almost certainly end up running on both). He does have to pay more attention to the number of cores, but a good SMP implementation will still run across a range of core counts. It's more the instruction set and memory management that matter to a developer (and even then, the instruction set probably matters more to compiler developers than OS developers, given that even OS developers don't do that much coding in assembly).
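(Aside: the point about running across a range of core counts is easy to demonstrate. Here is a minimal POSIX sketch, with a hypothetical worker function and most error handling omitted, that asks the OS how many cores are online and spawns that many threads, so the same binary adapts from a dual-core laptop to a many-core server.)

/* Minimal sketch: spawn one worker per online core, whatever that number is.
 * The worker body is a placeholder; real code would hand each thread a slice
 * of actual work. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void *worker(void *arg)
{
    long id = (long)arg;
    printf("worker %ld running\n", id);
    return NULL;
}

int main(void)
{
    long ncores = sysconf(_SC_NPROCESSORS_ONLN);  /* cores online right now */
    if (ncores < 1)
        ncores = 1;

    pthread_t *threads = malloc((size_t)ncores * sizeof *threads);
    if (threads == NULL)
        return 1;

    for (long i = 0; i < ncores; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);
    for (long i = 0; i < ncores; i++)
        pthread_join(threads[i], NULL);

    free(threads);
    return 0;
}

Code like this cares about the memory model and the synchronisation primitives the architecture gives you, not about whether the cores run at 2.1 GHz or 3 GHz.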

Also, even for a user, you aren't dreaming very big. 2.1 GHz dual core is positively mainstream now, and in 20 years may look downright slow. As a user, I want a 10^34 GHz clock speed with as many cores as physically possible (not very realistic, of course, but this is a *dream* CPU).