Rusky wrote:
Those challenges have been faced by every company to break into the CPU market, including Intel (vs IBM), AMD (with AMD64), and all the ARM manufacturers. Look at the companies that license and manufacture their own ARM CPUs. They have essentially the same problem (minus creating the architecture) and are doing very well for themselves, even threatening Intel in some areas. It's definitely a possibility.

Intel produced the first microprocessor (a major leap in its own right) and gained traction in the marketplace with the 8008 and, increasingly, the 8080. Their meteoric rise, of course, is attributable to the 8086/8088, which IBM picked because, of the processors available at the time, it was (A) the cheapest, (B) able to address more than 64 kB of memory (a necessity of the era), (C) backed by a large suite of peripheral chips (simplifying the design and making it cheaper), and (D) in need of minimal external glue.
AMD got into the x86 market as a second source for Intel; they never had to break into the market to anywhere near the same extent. Consider that, if you sell a compatible microprocessor to an existing market-dominant party, then it is only the capabilities of your hardware, your cost, your marketing, and any contracts your competitors may have taken out which restrain you. AMD64 was a simple offering of what everybody wanted (x86-compatible 64-bit CPUs) while Intel was caught with their pants down (they believed their own IA-64 kool-aid, when anybody with in-depth knowledge of CPU design should have seen that disaster of a design from a mile away).
ARM, of course, was purpose-built by Acorn for their own RISC-based desktop machines; it kind of fumbled along in that space for a while. When developing the Newton, Apple went hunting for a low-power, low-interrupt-latency, high-performance processor. ARM was closest to their requirements, so they worked with Acorn to spin off the ARM division as Advanced RISC Machines, where they co-developed the ARM610, which was used in the Newton. ARM cores were low power, small, and available for licensing; they were the right cores at the right time, and therefore started getting designed into everything from PDAs to fridges.
Note that, in the cases of both Intel and ARM, the architecture had cores available in the right place at the right time; in other words, they happened to find their niche before anybody else did. This is a market where inertia is not to be underestimated.
Rusky wrote:
The Mill has a cheaper and faster design, which will significantly lower the cost. They estimate a 10x performance/power increase over traditional OoO superscalar designs for good, well-analyzed reasons - even early on it has a good chance of matching Intel's power/performance, if not beating it, for the markets they target. That is their business model, after all.

The Mill will always suffer the intrinsic disadvantage that it will never be able to match Intel's fabs or volumes. Unless they have managed to secure very significant investment, they will not be able to afford leading-edge process nodes, which will limit clock speeds and power efficiency, and therefore performance. They will also be paying more per CPU, because their chips will cover a larger silicon area, will be produced in lower volumes, and will need to recoup high setup costs. Additionally, larger silicon area means lower yields.
Rusky wrote:
The software really isn't a problem either. The Mill is designed to run regular old off-the-shelf C code, including Unix kernels, so most applications will see the benefits immediately on recompile (using LLVM as the backend, so even that won't take much/any effort). While a Linux port may not take full advantage of the design, it will still benefit from it, with no real performance penalties.

Some of the benefits will be available while running 'legacy software', but not all. Unmodified apps will be unable to exploit portals, for example.
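To make that concrete, here is a rough conceptual sketch in plain C of what "exploiting portals" would involve. This is not Mill code, and fs_read_portal is a made-up name used purely for illustration; the point is only that a portal is reached like an ordinary function call that crosses protection domains, so a binary which only knows the trap-based read() path has to be rebuilt (or have its libc swapped out) before it can take that route. The stub forwards to read() so the sketch compiles on an ordinary system.

Code:
/* Conceptual sketch only -- not real Mill code and not a real Mill API. */
#include <stddef.h>
#include <unistd.h>

/* Conventional route: libc's read() wraps a system-call trap.  The cost
 * (mode switch, register save/restore, cache and TLB effects) is paid on
 * every call, so applications try not to make the call too often. */
static ssize_t conventional_read(int fd, void *buf, size_t len)
{
    return read(fd, buf, len);
}

/* Portal-style route (hypothetical): the file-system service exports an
 * entry point that callers invoke like a normal function, and the hardware
 * keeps each side confined to its own protection domain.  An unmodified
 * binary that only knows the trap interface never takes this path.  The
 * stub below just forwards to read() so the sketch is compilable here. */
static ssize_t fs_read_portal(int fd, void *buf, size_t len)
{
    return read(fd, buf, len);   /* stand-in for the cross-domain call */
}

static ssize_t portal_read(int fd, void *buf, size_t len)
{
    return fs_read_portal(fd, buf, len);
}

int main(void)
{
    char buf[64];
    ssize_t a = conventional_read(0, buf, sizeof buf);
    ssize_t b = portal_read(0, buf, sizeof buf);
    return (a < 0 || b < 0) ? 1 : 0;
}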
Rusky wrote:
Your expected scenario is still possible, but I don't expect it. I expect it to break into some niche (server farms might be easier?) and grow from there. And like I said before, if their patents get sold off, many aspects of the design could show up in a next-gen Intel or ARM or something. Some of us will likely see at least some aspects of it.

I wish them luck; they have good people. I do think their architecture commits some sins: there are already something like four sub-variants of it, for example, all upwards compatible, but of course you need to recompile your code for each variant for best performance, so they are coming close to committing the VLIW sin. Their best bet, I think, is to enter the supercomputer space, but I'm not sure they can muster the resources to do that.
A lot of their performance comes from the Belt. This is very clever, but hard to retrofit into existing processor designs, so I would not expect it to transition to other hardware. Some of their other functionality might. Things like portals are cool, but probably difficult to transfer. Additionally, much existing software is engineered around the expense of system calls: "legacy software" is written to avoid them, which means system calls are not generally a major optimization target for processor designers.
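To illustrate that last point in standard C (nothing Mill-specific), here is how existing code already amortizes system-call cost with userspace buffering: the buffered version issues a handful of large write() calls underneath instead of one per byte, which is exactly why making the system-call boundary cheaper helps such code less than the raw numbers suggest. The output file and sizes are arbitrary.

Code:
#include <stdio.h>
#include <unistd.h>

#define N (1 << 20)   /* about a million bytes */

/* One system call per byte: roughly N kernel crossings. */
static void write_unbuffered(int fd)
{
    char c = 'x';
    for (int i = 0; i < N; i++)
        (void)write(fd, &c, 1);
}

/* Userspace buffering: stdio batches the bytes and issues only a few
 * large write() calls underneath. */
static void write_buffered(FILE *f)
{
    for (int i = 0; i < N; i++)
        fputc('x', f);
    fflush(f);
}

int main(void)
{
    FILE *f = fopen("/dev/null", "w");
    if (f == NULL)
        return 1;

    write_unbuffered(fileno(f));
    write_buffered(f);

    fclose(f);
    return 0;
}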