Hi,
OSwhatever wrote:Brendan wrote:It "failed" because there wasn't a good enough compiler, and because it severely restricts the range of optimisations that could be done in future versions of the CPU (without causing performance problems for code tuned for older Itanium CPUs).
The problem in general with Itanium was that it was too complex to optimize for.
So you're saying that it "failed" because there wasn't a good enough compiler (and that there wasn't a good enough compiler because it was too complex to optimize for)?
OSwhatever wrote:Compiler instruction scheduling is the way forward when you go massively multicore as OOE in HW increases complexity a lot, multiply by all the cores you have and you save suddenly a lot if you remove it.
Compiler instruction scheduling could work if there's only ever one implementation of the architecture; or if all software is provided as source code (so that the end user can recompile it with a compiler that tunes the result for their specific CPU). Otherwise, as soon as you add more execution units, change the length of the pipeline, or make any other change that affects instruction timing, you end up with performance problems, because the code will have been tuned for a different CPU and the hardware won't do anything to work around the "wrong instruction scheduling" problem.
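To make that concrete, here's a purely illustrative C sketch (not Itanium code; the latency and issue-width numbers are just assumptions for the example). The loop is hand-unrolled so that four independent multiplies are in flight at once, which is a good schedule only if the multiply latency and issue width the scheduler assumed match the CPU the code actually runs on:

Code:
void scale(float *dst, const float *src, float k, int n)
{
    int i;

    /* Hand-scheduled body: keep 4 independent multiplies in flight, chosen
     * on the assumption of (say) a 4-cycle multiply latency and one multiply
     * issued per cycle. A different pipeline wants a different unroll factor,
     * and an in-order core won't re-schedule this for you at run time. */
    for (i = 0; i + 4 <= n; i += 4) {
        float a = src[i]     * k;
        float b = src[i + 1] * k;
        float c = src[i + 2] * k;
        float d = src[i + 3] * k;

        dst[i]     = a;
        dst[i + 1] = b;
        dst[i + 2] = c;
        dst[i + 3] = d;
    }

    /* Leftover elements */
    for (; i < n; i++)
        dst[i] = src[i] * k;
}

An out-of-order core will happily re-order the plain one-multiply-per-iteration version on the fly; an in-order core relies entirely on whatever schedule was baked in at compile time.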
As far as complexity goes, it's like RISC: it sounds good in theory, but in practice the extra complexity is necessary for the best performance (rather than just "average" performance), and the space it consumes in silicon is mostly irrelevant compared to the space consumed by cache.
rdos wrote:The only way to achieve more performance is to write applications with many threads.
Applications with many threads (or applications with at least one thread per CPU) can help for some things, but don't help for others. For an example, see if you can figure out how to use many threads to speed up Floyd–Steinberg dithering.
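For reference, here's a minimal greyscale sketch of the Floyd–Steinberg inner loop (the function name and the 1-bit quantization are just for illustration). The point to notice is the dependency chain: each pixel's quantization error is pushed into the pixel to its right and into the next row, so pixel (x+1, y) can't be processed until pixel (x, y) is finished:

Code:
void fs_dither(float *img, int w, int h)
{
    /* img is a w*h greyscale image, values 0.0 .. 255.0, row-major */
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            float old = img[y * w + x];
            float quantized = (old < 128.0f) ? 0.0f : 255.0f;   /* 1-bit output */
            float err = old - quantized;

            img[y * w + x] = quantized;

            /* Error diffusion - the serial dependency chain. Pixel (x+1, y)
             * can't be quantized until this pixel's error has been added,
             * and row y+1 can't be finished until row y is finished. */
            if (x + 1 < w)              img[y * w + (x + 1)]       += err * 7.0f / 16.0f;
            if (y + 1 < h && x > 0)     img[(y + 1) * w + (x - 1)] += err * 3.0f / 16.0f;
            if (y + 1 < h)              img[(y + 1) * w + x]       += err * 5.0f / 16.0f;
            if (y + 1 < h && x + 1 < w) img[(y + 1) * w + (x + 1)] += err * 1.0f / 16.0f;
        }
    }
}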
Basically you end up with Amdahl's law, where the parts that can't be done in parallel limit the maximum performance (or, where single-threaded performance has a huge impact on software that isn't embarrassingly parallelizable).
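As a back-of-the-envelope illustration (the 90% figure is just an assumption for the example), here's Amdahl's law as a few lines of C - with a 10% serial portion the speedup is capped at 10x, no matter how many cores you add:

Code:
#include <stdio.h>

int main(void)
{
    double p = 0.90;    /* assumed fraction of the work that can run in parallel */
    int cores[] = { 1, 2, 4, 8, 16, 64, 1024 };

    for (unsigned i = 0; i < sizeof cores / sizeof cores[0]; i++) {
        double speedup = 1.0 / ((1.0 - p) + p / cores[i]);
        printf("%5d cores: %6.2fx speedup (limit is %.1fx)\n",
               cores[i], speedup, 1.0 / (1.0 - p));
    }
    return 0;
}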
rdos wrote:Typical users cannot care less about compilers. They want their software to work, and to run the fastest possible. Practically nobody sold native Itanium software, so the CPU ended up executing legacy-software at a terribly low speed.
You're right in that there may not have been native versions of popular 3D games, MS Office, or anything else that nobody purchasing a high-end/Itanium server would ever care about anyway.
For high-end servers, "software" mostly means OSs, plus things like database management systems, web servers and maybe some custom-designed software. All of the OSs that mattered were native (including HP-UX, Windows, Linux, FreeBSD, etc.), all of the database management systems that mattered were native (including Oracle's and Microsoft's), and all of the web servers that mattered were native (IIS, Apache). Custom-designed software (e.g. where the company that owns the server also owns/develops/maintains the source code for its own "in-house" applications) can be trivially recompiled.
rdos wrote:Brendan wrote:For high-end servers, support for legacy software is virtually irrelevant (it's not like the desktop space where everyone wants to be able to run ancient Windows applications).
That is not the high-volume market. The high-volume market is desktop PCs and portable PCs.
No. The high volume market is small/embedded systems, where a CPU sells for less than $10 and the CPU manufacturer is lucky to make $1 profit on each unit sold. How many little MIPS CPUs do you think you'd need to sell just to come close to the profit Intel would make from one multi-socket (Itanium or Xeon) server?
It's like saying aircraft manufacturers like Boeing should just give up because people buy more bicycles and cars than airplanes.
rdos wrote:Brendan wrote:Of course "failed" depends on your perspective. Once upon a time there were a variety of CPUs competing in the high-end server market (80x86, Alpha, Sparc, PA-RISC, etc). Itanium helped kill everything else, leaving Intel's Itanium, Intel's 80x86 and AMD's 80x86 (which can barely be called competition at all now). It's a massive success for Intel - even if they discontinue Itanium they're still laughing all the way to the bank from (almost) monopolising an entire (very lucrative) market.
That's not the way I understand it. AFAIK, Intel launched (and patented) Itanium in order to get rid of the competition once and for all. This was a big failure since practically nobody bought Itanium. Then it was AMD that extended x86 to 64 bits, and thereby made sure that they stayed on the market.
You're looking at the wrong market. Nobody used Itanium for embedded systems, smartphones, notebooks, laptops, desktops or low-end servers, because it was never intended for any of that in the first place. People did buy Itanium for high-end servers, especially where reliability/fault tolerance and/or scalability was needed. Extending a desktop CPU to 64-bit doesn't suddenly make reliability/fault-tolerance and/or scalability features appear out of thin air; and while some of those features have found their way into chipsets for Xeon and Opteron, most of the competing high-end server CPUs (Alpha, Sparc, PA-RISC) were dead or dying before that happened.
Here's a timeline for you to think about:
- 1986 - PA-RISC released
- 1987 - Sparc released
- 1992 - DEC Alpha released
- 1998 - Compaq purchases most of DEC, decides to phase out Alpha in favour of Itanium (before any Itanium has even been released). Compaq sells intellectual property related to Alpha to Intel.
- 2001 - original Itanium (mostly just a "proof of concept") released
- 2002 - Itanium 2 released
- 2003 - first 64-bit 80x86 released
- 2004 - first 64-bit 80x86 released by Intel
- 2006 - Sparc gives up any hope of competing and becomes open-source to stay alive
- 2007 - HP discontinues PA-RISC in favour of Itanium
Rudster816 wrote:Intel wanted Itanium to be the 64 bit successor to x86. It failed in this sense, and the business aspect of things. It also did the exact opposite of what Brendan said (removing competition), because while Intel was working on the doomed Itanium architecture, AMD was developing AMD64. If Intel had done the same thing as AMD and just extended x86 to 64 bits, AMD's architecture would have inevitably failed because Intel had (and still does) way more clout than AMD. But now Intel needs a license from AMD to make chips, when it used to be that AMD just needed a license from Intel. This benefited AMD in two ways: not only did they have the cash from the license agreement, but they also had a huge lead in the performance department courtesy of Athlon64. It wasn't until Core2Duo (and the return of the P6 microarchitecture) that Intel regained the performance advantage.
The only reason AMD had any lead was that Intel's NetBurst microarchitecture sucked; AMD's performance advantage had nothing to do with 64-bit. Don't forget that when AMD first introduced 64-bit nobody really cared because Windows didn't support it anyway, and by the time Windows did support 64-bit (Vista, 2006) Intel was selling 64-bit "Core" and "Core 2" CPUs.
Here's a timeline for you to think about:
- 1998 - Alpha decides to hide like a little school-girl just because of rumours of Itanium
- 2002 - Itanium 2 released
- 2003 - AMD creates a secret weapon to counter the Itanium threat
- 2004 - Intel walks in and takes AMD's new weapon; then continues to use Itanium in one hand and 64-bit 80x86 in the other hand to pound the daylights out of everyone including AMD
- 2006 - Sparc sees the "dual wielding" Intel goliath, wets its pants and tries to hide
- 2007 - PA-RISC commits suicide in fear of the "dual wielding" Intel goliath
Cheers,
Brendan