problems with x86 arch
Posted: Mon Jun 23, 2008 10:32 am
by suthers
I noticed that many people think there are problems with the x86 architecture, e.g. Clicker quit his project because of the 'devils of backward-compatibility'. OK, there are a few problems with that, but I don't think any mainstream architecture won't eventually have backward compatibility problems, because it has to evolve...
Personally I haven't ever found any problems with it, but then again I've never worked with any other architecture, so my views are a bit biased (well, not biased, but made without full knowledge of the 'terrain').
There are other people who seem to say it is illogical, etc...
I don't see any specific problems....
Could anybody enlighten me on why so many people dislike this architecture?
Thanks,
Jules
Re: problems with x86 arch
Posted: Mon Jun 23, 2008 10:49 am
by Korona
Instruction decoding is very complicated and slow on the x86 platform (the instruction length depends on the opcode). Many instructions can only operate on certain registers (e.g. div only works on edx:eax); that makes compiler development hard. There are many slow special-purpose instructions that nobody uses (aaa, aad, bt, bts, stosb, enter, leave, loop, lsl, just to name a few). There is a huge amount of obsolete stuff, e.g. the BIOS, real mode, 16-bit protected mode, v86 mode, (32-bit protected mode, system management mode), the PIT, the PIC, VGA, the A20 gate and the RTC.
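To make the register-constraint point concrete: a 32-bit div always takes its dividend in edx:eax and returns the quotient in eax and the remainder in edx, so the code has to shuffle values into exactly those registers. A minimal sketch in NASM syntax (the dividend/divisor labels are just placeholders):
Code: Select all
; 32-bit unsigned division: DIV gives no choice of registers.
mov eax, [dividend]   ; low half of the dividend must be in EAX
xor edx, edx          ; DIV divides EDX:EAX, so clear the high half
mov ecx, [divisor]    ; the divisor can be any register or memory operand
div ecx               ; quotient -> EAX, remainder -> EDX, fixed by the ISA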
EDIT: x86 segmentation is also very complicated and useless.
Re: problems with x86 arch
Posted: Mon Jun 23, 2008 11:30 am
by Combuster
I dunno where you got that information from, but you're off quite a bit.
BTS (or BTC) is a very useful synchronisation feature as it can perform a test-and-set operation. LEAVE is used in almost all compiler implementations.
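For example, a test-and-set spinlock can be built directly on lock bts. A minimal sketch in NASM syntax (lock_var is a made-up label for the lock word):
Code: Select all
acquire:
    lock bts dword [lock_var], 0   ; atomically test bit 0 and set it
    jc .spin                       ; CF=1 -> the bit was already set, lock is taken
    ret
.spin:
    pause                          ; hint to the CPU that this is a spin loop
    test dword [lock_var], 1       ; spin on a plain read first
    jnz .spin
    jmp acquire                    ; looks free, retry the atomic test-and-set

release:
    mov dword [lock_var], 0        ; a plain store is enough to release
    ret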
32-bit protected mode is not obsolete by far. The majority of machines in use can't do the "logical" alternative: long mode.
v8086 mode is far too widely used to be called obsolete as well. Try counting the number of posts on the subject here and see why it is so useful.
A BIOS will not ever be obsolete. Every computer needs a method to bootstrap itself. The way it manifests, however, may well change in the future, e.g. via EFI or similar.
If you want to kill the RTC, you'll remove the one source of time the computer has. If you kill VGA, you'll get rid of the one method that lets every computer access the screen when it doesn't have the appropriate video driver. Now go tell the user to install his ATI driver when he can't see what he's doing.
The problem with the x86 architecture is not what it's not capable of, because it supports practically everything you can imagine. The problem is that over time it accumulated lots of new features, while the majority of the old ones can't be removed, leaving the end programmer to take care of all these features even though he most likely doesn't care about using them. In essence, the programmer is given so much freedom that decision-making becomes very hard: there are far more options than on the average other system. There's nothing useless about it, really - it just means you never came across the application that fits that feature's purpose.
It's a challenge. And either you like this challenge, or you don't.
Re: problems with x86 arch
Posted: Mon Jun 23, 2008 12:05 pm
by Korona
Combuster wrote:BTS (or BTC) is a very useful synchronisation feature as it can perform a test-and-set operation. LEAVE is used in almost all compiler implementations
BTS and BTC are slow compared to other instructions. They are useful for synchronisation, that is true. You will never see a compiler using leave when optimization is turned on.
Combuster wrote:32-bit protected mode is not obsolete by far. The majority of machines in use can't do the "logical" alternative: long mode.
That's why I wrote it in brackets.
Combuster wrote:v8086 mode is far too widely used to be called obsolete as well. Try counting the number of posts on the subject here and see why it is so useful.
Its only use is to run legacy software. Therefore it is obsolete. I don't think removing v86 mode alone would be good, but removing both v86 mode and real mode would be (less complex CPUs => cheaper, smaller and more powerful CPUs). Legacy applications can run in emulators and still achieve decent speeds.
Combuster wrote:A BIOS will not ever be obsolete. Every computer needs a method to bootstrap itself. The way it manifests, however, may well change in the future, e.g. via EFI or similar.
Yes, that's true. But the PC BIOS runs in real mode and was written for single-tasking, single-processor systems. There is no portable/safe/good way for modern operating systems to interact with the PC BIOS. (Well, there is ACPI, but ACPI is bloated and not very pretty.)
Combuster wrote:If you want to kill the RTC, you'll remove the one source of time the computer has. If you kill VGA, you'll get rid of the one method that lets every computer access the screen when it doesn't have the appropriate video driver. Now go tell the user to install his ATI driver when he can't see what he's doing.
That's why we need better low-level standards. The BIOS/firmware/whatever you call it should be able to switch video modes and plot pixels onto the screen in a device- and operating-system-independent way. We have VESA, but it only runs in real mode without special emulation. Make all BIOS functions work in protected mode, or better, make them work on a _simple_ virtual machine (one that is not as bloated as ACPI's AML; e.g. a modified version of the Java virtual machine), and many compatibility problems the x86 suffers from will disappear. Replace the RTC by a modern timer, e.g. the HPET (and make it easy to use, i.e. design a smart way of enumerating it that does not rely on parsing ACPI AML code).
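To show where that hurts today: a VBE mode switch is an int 0x10 BIOS call, so a protected-mode kernel has to drop back to real mode, use v86 mode, or emulate the BIOS just to do something like this. A rough real-mode sketch in NASM syntax (mode number 0x118 is only an example):
Code: Select all
bits 16
set_mode:
    mov ax, 0x4F02        ; VBE function 4F02h: set SuperVGA video mode
    mov bx, 0x4118        ; example mode 0x118, bit 14 set = linear framebuffer
    int 0x10              ; only works in real mode (or v86 / an emulator)
    cmp ax, 0x004F        ; AL=4Fh means supported, AH=00h means successful
    jne .fail
    ret
.fail:
    ret                   ; fall back to plain VGA or report an error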
Combuster wrote:The problem with the x86 architecture is not what it's not capable of, because it supports practically everything you can imagine. The problem is that over time it accumulated lots of new features, while the majority of the old ones can't be removed, leaving the end programmer to take care of all these features even though he most likely doesn't care about using them. In essence, the programmer is given so much freedom that decision-making becomes very hard: there are far more options than on the average other system. There's nothing useless about it, really - it just means you never came across the application that fits that feature's purpose.
That's true, however it is only one aspect. If all obsolete features were removed from today's systems, CPUs and motherboards would be a little cheaper and less complex. BIOS and operating system code would be prettier and easier to maintain (=> reduced costs). However, it is hard for system manufacturers to remove those obsolete features from the system, as doing that would break backwards compatibility. Backwards compatibility is generally a good thing, but we don't need hardware that was obsolete 20 years ago.
Re: problems with x86 arch
Posted: Mon Jun 23, 2008 12:25 pm
by Combuster
Korona wrote:You will never see a compiler using leave when optimization is turned on.
Like I said, I don't know where you got your information from:
Code: Select all
D:\temp>gcc -O3 -c -o test.o temp.c
D:\temp>objdump -d test.o
test.o: file format coff-i386
Disassembly of section .text:
00000000 <_main>:
0: 55 push %ebp
1: b8 10 00 00 00 mov $0x10,%eax
6: 89 e5 mov %esp,%ebp
8: 83 ec 38 sub $0x38,%esp
....
36: c9 leave
37: c3 ret
Re: problems with x86 arch
Posted: Mon Jun 23, 2008 12:34 pm
by Korona
Well, the Intel manual says that leave is bad, but the AMD manual suggests using the instruction (I did not check the most recent versions, but I don't think anything has changed):
Intel® 64 and IA-32 Architectures Optimization Reference Manual wrote:Assembly/Compiler Coding Rule 31. (ML impact, M generality) Avoid using complex instructions (for example, enter, leave, or loop) that have more than four μops and require multiple cycles to decode. Use sequences of simple instructions instead.
AMD Athlon Processor x86 Code Optimization Guide wrote:The LEAVE instruction is a single-byte instruction and thus saves two bytes of code space over the MOV/POP epilogue sequence. Replacing the MOV/POP sequence with LEAVE also preserves decode bandwidth.
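For reference, the two epilogues the manuals are comparing do the same thing; leave is just the compact encoding of the mov/pop pair. A sketch in NASM syntax:
Code: Select all
epilogue_leave:
    leave            ; equivalent to mov esp, ebp / pop ebp, in one byte
    ret

epilogue_mov_pop:
    mov esp, ebp     ; tear down the stack frame explicitly
    pop ebp          ; restore the caller's frame pointer
    ret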
It seems that this depends highly on the instruction decoding mechanism that is used by the processor. Thanks for clearing this up. Perhaps gcc won't use leave when you tell it to generate code for a P4.