
Re: OSDev's dream CPU

Posted: Fri Jul 27, 2012 6:28 pm
by Combuster
Yoda wrote:
Combuster wrote:Many of the individual steps you want to take can perfectly well be encoded in two bytes
But many won't. The finer the granularity, the more efficient an encoding you can implement. My opinion is that 1-byte granularity is the perfect balance between decoding speed and memory usage.
You're entitled to your opinion, but ARM's THUMB disagrees with that sentiment. (also, have you ever tried the m68k architecture?)
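For what it's worth, a deliberately toy C sketch of the trade-off under discussion (both "encodings" below are invented for illustration, not ARM's or anyone else's): with byte granularity the next instruction boundary can fall on any address and is only known after at least partially decoding the current opcode, while with 16-bit granularity boundaries fall only on even addresses and the length test is trivial.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical byte-granular ISA: the top two bits of the opcode
   encode the instruction length (1..4 bytes), so the next fetch
   address depends on decoding the current opcode first. */
static size_t insn_length(uint8_t opcode)
{
    return 1 + (opcode >> 6);
}

size_t count_byte_granular(const uint8_t *code, size_t size)
{
    size_t pc = 0, n = 0;
    while (pc < size) {
        pc += insn_length(code[pc]);  /* serial dependency on the decode */
        n++;
    }
    return n;
}

/* Thumb-2-style 16-bit granularity (sketch, not real encodings): an
   instruction is one or two halfwords, and one bit of the first
   halfword says which, so boundaries only ever fall on even addresses. */
size_t count_halfword_granular(const uint8_t *code, size_t size)
{
    size_t pc = 0, n = 0;
    while (pc + 2 <= size) {
        uint16_t first = (uint16_t)(code[pc] | (code[pc + 1] << 8));
        pc += (first & 0x8000) ? 4 : 2;   /* invented "wide instruction" flag */
        n++;
    }
    return n;
}
```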

Re: OSDev's dream CPU

Posted: Sat Jul 28, 2012 5:10 am
by Yoda
Combuster wrote:You're entitled to your opinion, but ARM's THUMB disagrees with that sentiment.
ARM introduced THUMB because they realized the drawbacks of 32-bit granularity. Who knows, maybe one day they'll release a TINY architecture – byte oriented :D. Truthfully, commercial success is not always an indicator of ISA perfection. You know, Intel/AMD dies have quite optimized internals despite an ugly ISA. The ARM architecture has existed since the mid-eighties, i.e. it has more than 25 years of history. Methinks ARM is going the same way as Intel – they try to support and develop rather outdated RISC ideas through progressive internal technologies.
Combuster wrote:(also, have you ever tried the m68k architecture?)
Yes, it is also good, although not perfect.
You'd do better to remember the PDP-11, which was truly 16-bit granular, and the transition to its successor, the VAX-11: DEC abandoned 16-bit granularity in favor of a byte-oriented architecture.

Re: OSDev's dream CPU

Posted: Sat Jul 28, 2012 3:14 pm
by Owen
Yoda wrote:
Combuster wrote:You're entitled to your opinion, but ARM's THUMB disagrees with that sentiment.
ARM introduced THUMB because they realized the drawbacks of 32-bit granularity. Who knows, maybe one day they'll release a TINY architecture – byte oriented :D. Truthfully, commercial success is not always an indicator of ISA perfection. You know, Intel/AMD dies have quite optimized internals despite an ugly ISA. The ARM architecture has existed since the mid-eighties, i.e. it has more than 25 years of history. Methinks ARM is going the same way as Intel – they try to support and develop rather outdated RISC ideas through progressive internal technologies.
Because inventing three whole instruction sets in those 24 years (ARM, now A32, introduced with ARMv1 in 1985; Thumb, introduced with ARMv4T in 1994 and vastly extended into Thumb-2 in 2005 [which equals ARM mode in performance and beats it when memory bandwidth is limited], renamed T32 in ARMv8; A64, a whole new ISA for the 64-bit architecture, introduced with ARMv8 in 2012) is clearly evidence of just making the old go faster, and not evidence of forward-thinking design and redesign...

You may also note that A64 returns to 32-bit instruction granularity, redefines the register file to contain 31 general-purpose registers, and completely replaces how the operating modes work...

Based on tradition, we can expect something along the lines of ARMv9 deprecating A32/T32 and probably removing support for the system instructions, ARMv10 making them optional, and ARMv11 removing them completely. ARM do not shy away from vast redefinitions of their architecture, as first seen with the removal of 26-bit mode and the continual evolution of the system-mode architecture (it is not expected that an ARMvN OS will run unmodified on an ARMvM processor).

And besides: by the time ARMv11 comes around, one can expect emulation performance to be good enough that emulating 32-bit code will be performance-competitive with the ARMv7 cores it was designed to run on.

Re: OSDev's dream CPU

Posted: Wed Aug 08, 2012 11:47 am
by JamesM
Owen wrote:
Yoda wrote:
Combuster wrote:You're entitled to your opinion, but ARM's THUMB disagrees with that sentiment.
ARM introduced THUMB because they realized the drawbacks of 32-bit granularity. Who knows, maybe one day they'll release a TINY architecture – byte oriented :D. Truthfully, commercial success is not always an indicator of ISA perfection. You know, Intel/AMD dies have quite optimized internals despite an ugly ISA. The ARM architecture has existed since the mid-eighties, i.e. it has more than 25 years of history. Methinks ARM is going the same way as Intel – they try to support and develop rather outdated RISC ideas through progressive internal technologies.
Because inventing three whole instruction sets in those 24 years (ARM, now A32, introduced with ARMv1 in 1985; Thumb, introduced with ARMv4T in 1994 and vastly extended into Thumb-2 in 2005 [which equals ARM mode in performance and beats it when memory bandwidth is limited], renamed T32 in ARMv8; A64, a whole new ISA for the 64-bit architecture, introduced with ARMv8 in 2012) is clearly evidence of just making the old go faster, and not evidence of forward-thinking design and redesign...

You may also note that A64 returns to 32-bit instruction granularity, redefines the register file to contain 31 general-purpose registers, and completely replaces how the operating modes work...

Based on tradition, we can expect something along the lines of ARMv9 deprecating A32/T32 and probably removing support for the system instructions, ARMv10 making them optional, and ARMv11 removing them completely. ARM do not shy away from vast redefinitions of their architecture, as first seen with the removal of 26-bit mode and the continual evolution of the system-mode architecture (it is not expected that an ARMvN OS will run unmodified on an ARMvM processor).

And besides: by the time ARMv11 comes around, one can expect emulation performance to be good enough that emulating 32-bit code will be performance-competitive with the ARMv7 cores it was designed to run on.
Sidenote 1: Thumb vs. ARM performance very much depends upon the benchmark in question.
Sidenote 2: ARM has been able to rewrite its architecture so easily because its end users generally don't have an upgrade path - they are pinned to a device with no way to change the hardware, and normally on a stable platform. That is changing now with the heterogeneous landscape of tablets, phones and servers coming out. I'd expect AArch64 to stay around in its current guise for quite some time.

Re: OSDev's dream CPU

Posted: Thu Oct 18, 2012 7:06 pm
by drunkenfox
Mine would be:

Architecture - MIPS-based "lazy a$$" assembly
Bits - 32 or 64
Cores - 2, 3, or 4
Speed - 2.1+ GHz

Re: OSDev's dream CPU

Posted: Fri Oct 19, 2012 3:32 am
by linguofreak
ponyboy wrote:Mine would be:

Architecture - MIPS-based "lazy a$$" assembly
Bits - 32 or 64
Cores - 2, 3, or 4
Speed - 2.1+ GHz
I wouldn't classify clock speed or number of cores as features that are really "dream CPU" material for me as a developer. As a *user*, sure, but a developer doesn't care whether his code runs on a 2 GHz or a 3 GHz implementation of an architecture (indeed, it will almost certainly end up running on both). He does have to pay more attention to the number of cores, but a good SMP implementation will still run across a range of core counts. It's more the instruction set and memory management that matter to a developer (and even then, the instruction set probably matters more to compiler developers than to OS developers, given that even OS developers don't do that much coding in assembly).
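To illustrate that point about core counts, a small user-space sketch (not OS-level SMP bring-up, and the "workers" print is just a stand-in for whatever the code actually scales): on most Unix-like systems the same binary can ask how many cores are online via the widely supported _SC_NPROCESSORS_ONLN extension to sysconf(), instead of assuming 2, 3 or 4.

```c
#include <stdio.h>
#include <unistd.h>   /* sysconf() */

int main(void)
{
    /* Query the number of cores currently online instead of
       hard-coding a count; the same binary then sizes its worker
       pool to whatever machine it happens to land on. */
    long cores = sysconf(_SC_NPROCESSORS_ONLN);
    if (cores < 1)
        cores = 1;    /* fall back gracefully if the query fails */

    printf("spawning %ld workers\n", cores);
    return 0;
}
```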

Also, even for a user, you aren't dreaming very big. 2.1 GHz dual core is positively mainstream now, and in 20 years may look downright slow. As a user, I want a 10^34 GHz clock speed with as many cores as physically possible (not very realistic, of course, but this is a *dream* CPU).