Combuster wrote: Many of the individual steps you want to take can perfectly well be encoded in two bytes.

Yoda wrote: But many won't. The less granularity you use, the more efficient encoding you can implement. My opinion is that 1-byte granularity is the perfect balance between decoding speed and memory usage.

You're entitled to your opinion, but ARM's THUMB disagrees with that sentiment. (Also, have you ever tried the m68k architecture?)
OSDev's dream CPU
- Combuster
- Member
- Posts: 9301
- Joined: Wed Oct 18, 2006 3:45 am
- Libera.chat IRC: [com]buster
- Location: On the balcony, where I can actually keep 1½m distance
- Contact:
Re: OSDev's dream CPU
Combuster wrote: You're entitled to your opinion, but ARM's THUMB disagrees with that sentiment.

ARM introduced THUMB because they realized the drawbacks of 32-bit granularity. Who knows, maybe one day they'll release a TINY architecture - byte-oriented. Truly speaking, commercial success is not always an indicator of ISA perfection. You know, Intel/AMD dies have quite optimized internals while being ugly at the ISA level. The ARM architecture has existed since the mid-eighties, i.e. it has more than 25 years of history. Methinks ARM is going the same way as Intel - they try to support and develop rather outdated RISC ideas by means of progressive internal technologies.
Combuster wrote: (also, have you ever tried the m68k architecture?)

Yes, it is also good, although not perfect.
You'd better remember the PDP-11, which was truly 16-bit granular, and the transition to its successor, the VAX-11. DEC abandoned 16-bit granularity in favor of a byte-oriented architecture.
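To put a rough number on that density argument, here is a toy sketch in C. The two "ISAs" below, their opcodes, lengths and the three-instruction program are all invented for illustration and don't correspond to any real architecture: the byte-granular encoding lets short operations cost one or two bytes, while a fixed 32-bit encoding pays four bytes for every instruction.

/* Hypothetical comparison of byte-granular vs. fixed 32-bit instruction
   encodings.  Opcodes and lengths are invented for illustration only. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Byte-granular toy ISA: the opcode byte decides the instruction length. */
static size_t insn_length(uint8_t opcode)
{
    switch (opcode) {
    case 0x00: return 1;   /* NOP                        */
    case 0x10: return 2;   /* INC  reg                   */
    case 0x20: return 4;   /* MOV  reg, imm16            */
    default:   return 1;   /* treat unknown bytes as NOP */
    }
}

int main(void)
{
    /* Toy program: NOP; INC r1; MOV r2, #1234 -- seven bytes in total. */
    const uint8_t prog[] = { 0x00, 0x10, 0x01, 0x20, 0x02, 0xD2, 0x04 };
    size_t pc = 0, count = 0;

    while (pc < sizeof prog) {          /* walk the stream byte by byte */
        pc += insn_length(prog[pc]);
        count++;
    }

    printf("byte-granular encoding: %zu instructions in %zu bytes\n",
           count, sizeof prog);
    printf("fixed 32-bit encoding : %zu instructions in %zu bytes\n",
           count, count * 4);           /* every instruction costs 4 bytes */
    return 0;
}

The same three instructions fit in 7 bytes under the byte-granular scheme versus 12 bytes at fixed 32-bit width; the price is a decoder that has to inspect each opcode before it knows where the next instruction starts.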
- Owen
- Member
- Posts: 1700
- Joined: Fri Jun 13, 2008 3:21 pm
- Location: Cambridge, United Kingdom
- Contact:
Re: OSDev's dream CPU
Combuster wrote: You're entitled to your opinion, but ARM's THUMB disagrees with that sentiment.

Yoda wrote: ARM introduced THUMB because they realized the drawbacks of 32-bit granularity. Who knows, maybe one day they'll release a TINY architecture - byte-oriented. Truly speaking, commercial success is not always an indicator of ISA perfection. You know, Intel/AMD dies have quite optimized internals while being ugly at the ISA level. The ARM architecture has existed since the mid-eighties, i.e. it has more than 25 years of history. Methinks ARM is going the same way as Intel - they try to support and develop rather outdated RISC ideas by means of progressive internal technologies.

Because inventing three whole instruction sets in those 24 years (ARM, now A32, 1985, introduced with ARMv1; Thumb, 1994, introduced with ARMv4T, vastly extended to Thumb-2, 2005 [which equals ARM mode in performance, and beats it when memory bandwidth is limited], renamed T32 by ARMv8; A64, a whole new ISA for the 64-bit architecture, introduced with ARMv8, 2012) is clearly evidence of just making the old go faster, and not evidence of forward-thinking design and redesign...
You may also note that A64 returns to 32-bit instruction granularity, redefines the register file to contain 31 registers, and completely replaces how the operating modes work...
Based on tradition, we can expect something along the lines of ARMv9 deprecating A32/T32 and probably removing support for the system instructions, ARMv10 making them optional, and ARMv11 removing them completely. ARM do not shy away from vast redefinitions of their architecture, first seen with the removal of 26-bit mode and in the continual evolution of the system-mode architecture (it is not expected that an ARMvN OS will run unmodified on an ARMvM processor).
And besides: by the time ARMv11 comes around, one can expect emulation performance to be such that emulating 32-bit code will be performance-competitive with the ARMv7 cores it was designed to run on.
Re: OSDev's dream CPU
Sidenote 1: Thumb vs. ARM performance very much depends upon the benchmark in question.
Sidenote 2: ARM has been able to rewrite its architecture so easily because its end users generally don't have an upgrade path - they are pinned to a device with no way to change the hardware, and normally on a stable platform. That is changing now with the heterogeneous landscape of tablets, phones and servers coming out. I'd expect AArch64 to stay around in its current guise for quite some time.
- drunkenfox
- Member
- Posts: 46
- Joined: Tue Mar 13, 2012 10:46 pm
Re: OSDev's dream CPU
Mine would be:
Architecture - MIPS-based "lazy a$$" assembly
Bits - 32 or 64
Cores - 2, 3, or 4
Speed - 2.1+ GHz
;goodbye OS, hello BIOS - jump to the x86 reset vector at the top of the 32-bit address space
mov eax, 0FFFFFFF0h    ; the leading 0 keeps the assembler from reading the hex literal as a label
jmp eax
-
- Member
- Posts: 510
- Joined: Wed Mar 09, 2011 3:55 am
Re: OSDev's dream CPU
ponyboy wrote: Mine would be:
Architecture - MIPS-based "lazy a$$" assembly
Bits - 32 or 64
Cores - 2, 3, or 4
Speed - 2.1+ GHz

I wouldn't really classify clock speed or number of cores as features that are really "dream CPU" material for me as a developer. As a *user*, sure, but a developer doesn't care whether his code runs on a 2 GHz or a 3 GHz implementation of an architecture (indeed, it will almost certainly end up running on both). He does have to pay more attention to the number of cores, but a good SMP implementation will still run across a range of core counts - see the sketch below. It's more the instruction set and memory management that matter to a developer (and even then, the instruction set probably matters more to compiler developers than OS developers, given that even OS developers don't do that much coding in assembly).
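As a minimal sketch of that point (assuming ordinary Unix-like user space rather than kernel code, and that sysconf(_SC_NPROCESSORS_ONLN) is available - it is a common extension, not strict POSIX), a program just asks how many cores are online instead of being written for "2, 3, or 4":

/* Discover the core count at runtime instead of hard-coding it. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long cores = sysconf(_SC_NPROCESSORS_ONLN);  /* cores currently online */

    if (cores < 1)
        cores = 1;                               /* fall back to a safe minimum */

    printf("spawning one worker per core: %ld workers\n", cores);
    return 0;
}

The same binary then scales from a single-core netbook to a many-core server without caring what the "dream" core count happens to be.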
Also, even for a user, you aren't dreaming very big. 2.1 GHz dual core is positively mainstream now, and in 20 years may look downright slow. As a user, I want a 10^34 GHz clock speed with as many cores as physically possible (not very realistic, of course, but this is a *dream* CPU).