Future of CPUs

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
Owen
Member
Posts: 1700
Joined: Fri Jun 13, 2008 3:21 pm
Location: Cambridge, United Kingdom

Re: Future of CPUs

Post by Owen »

berkus wrote:
skyking wrote:
berkus wrote:
RISC architectures starting from, perhaps, Commodore 64 and its MOS 6502 chip and up to TI OMAP and nVidia tegra2 boards would disagree.
Who would disagree with what? AFAIK the ARM architecture does not include integer division in the instruction set...
armv7 and up have this.

What I was talking about: the C64 has several very simple processors, each of which is very special purpose. OMAP boards have special-purpose DSPs in addition to a relatively weak CPU, and the same goes for Tegra boards. They obtain fairly good general purpose performance nonetheless, rendering your point moot.
ARMv7 ARM wrote: UDIV
Encoding T1 ARMv7-R
So, it's only available on ARMv7-R processors, in Thumb mode, not on ARMv7-A processors (the ones you're thinking of); otherwise the manual would say ARMv7 or ARMv7-A.
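To put the cost in perspective: on a core without hardware divide, the compiler calls a runtime helper (__aeabi_uidiv in the ARM EABI) that does the whole thing in software. A minimal sketch of what such a helper boils down to (not the actual library code, just the classic shift-and-subtract loop):

#include <stdint.h>

/* Roughly what a software unsigned-divide helper (e.g. __aeabi_uidiv) has to
 * do when the core has no UDIV instruction: one quotient bit per iteration. */
uint32_t soft_udiv(uint32_t num, uint32_t den)
{
    uint32_t quot = 0, rem = 0;

    if (den == 0)
        return 0;                             /* real helpers trap or call a handler */

    for (int i = 31; i >= 0; i--) {
        rem = (rem << 1) | ((num >> i) & 1);  /* bring down the next dividend bit */
        if (rem >= den) {
            rem -= den;
            quot |= 1u << i;
        }
    }
    return quot;                              /* ~32 iterations vs. a few cycles for UDIV */
}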

As for OMAPs: if they truly wanted the best possible general purpose performance, they would use a more powerful processor instead of the IVA. The fact that video decoders and GPUs exist is testament to the failings of CPUs as computation devices, not to the advantage of a special purpose core in this regard.

The other reason the OMAP has dedicated DSPs? Less power consumption.
OSwhatever
Member
Posts: 595
Joined: Mon Jul 05, 2010 4:15 pm

Re: Future of CPUs

Post by OSwhatever »

x86 will never be as ultra low power as ARM, MIPS and a myriad of others. x86 is too bloated and eats transistors; the reason Intel is still competitive in low power is that it has very skilled engineers and the absolute latest process technology. ARM chips are usually made on an older process than Intel's and beat Intel anyway.

That was the preamble to what I really wanted to say: why didn't Intel develop an "x86 light"? In the embedded world, backwards compatibility isn't as important as on your PC and switching to another CPU is easier. So you gather statistics on which instructions are used most and remove those that are considered rare. Throw out V86, the 286 modes and so on, and remove segmentation since nobody really used it anyway (properly, at least). Now what is the benefit of doing this? Well, Intel gets to compete better, there are a lot of mature tools for x86, and x86 has excellent code density. Compilers will probably emit code that already is "x86 light", since they use simpler instructions in order to enhance parallelism. The removed instructions can even be trapped if you want. System code would have to be altered, but that would be rather quick.
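To make the statistics-gathering step concrete, here is a rough sketch in C: feed it the output of objdump -d for some real binaries and it tallies how often each mnemonic shows up, which is exactly the data you would use to decide what makes the cut for an "x86 light". The parsing is deliberately crude and assumes objdump's usual tab-separated disassembly layout:

#include <stdio.h>
#include <string.h>

/* Tally instruction mnemonics from "objdump -d" output fed on stdin. */
struct entry { char name[32]; long count; };

int main(void)
{
    static struct entry tab[4096];
    size_t used = 0;
    char line[512];

    while (fgets(line, sizeof line, stdin)) {
        char *p = strrchr(line, '\t');        /* mnemonic follows the last tab */
        if (!p)
            continue;                         /* label or header line, skip it */
        char mnem[32];
        if (sscanf(p + 1, "%31s", mnem) != 1)
            continue;

        size_t i;
        for (i = 0; i < used; i++)            /* linear search is fine for a one-off tool */
            if (strcmp(tab[i].name, mnem) == 0)
                break;
        if (i == used) {
            if (used == sizeof tab / sizeof tab[0])
                continue;                     /* table full, ignore new mnemonics */
            strcpy(tab[used++].name, mnem);
        }
        tab[i].count++;
    }

    for (size_t i = 0; i < used; i++)
        printf("%8ld  %s\n", tab[i].count, tab[i].name);
    return 0;
}

Something like "objdump -d /bin/bash | ./opcount | sort -n" gives a quick picture of which instructions compiled code actually leans on.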

Intel already did something like this with the 80376, which was protected mode only.

http://en.wikipedia.org/wiki/Intel_80376

Intel gave up that idea for some reason.

The Pentium patents expire in 2012, so if you're up for the challenge...
arkain
Posts: 7
Joined: Fri Aug 13, 2010 1:50 pm

Re: Future of CPUs

Post by arkain »

Owen wrote:The fact that video decoders and GPUs exist is testament to the failings of CPUs as computation devices, not to the advantage of a special purpose core in this regard.
Actually, the fact that co-processors of any type exist is a testament to several facts:
1. Dedicated hardware will generally out-perform equivalent software...
2. Using dedicated hardware in conjunction with GP hardware allows for better use of the GP hardware...
3. The use of co-processors in general increases the flexibility of what can be done...

There are more points to be made, but they all point out an oversight in your argument. A GP CPU was never designed to be a solely computational device. That's why they're called "general purpose." The only reason we even have floating point instructions in a GP CPU at all is because Intel was really lazy when designing the 8087. Since the use of the floating-point co-processor suspended the instruction cycle of the CPU it was slaved to, there was really no advantage to making a separate CPU. If it had instead been designed to work asynchronously, allowing the CPU to continue executing other instructions, then it would have remained a co-processor and been as widely praised as the first "graphics accelerators" when they came out.

As a "computational device," all GP CPU's fail, but only because that wasn't the focus of their design.
Owen
Member
Posts: 1700
Joined: Fri Jun 13, 2008 3:21 pm
Location: Cambridge, United Kingdom

Re: Future of CPUs

Post by Owen »

arkain wrote:
Owen wrote:The fact that video decoders and GPUs exist is testament to the failings of CPUs as computation devices, not to the advantage of a special purpose core in this regard.
Actually, the fact that co-processors of any type exist is a testament to several facts:
1. Dedicated hardware will generally out-perform equivalent software...
Assuming equivalent production resources and that you are performing something in the dedicated hardware's problem domain, yes
arkain wrote:2. Using dedicated hardware in conjunction with GP hardware allows for better use of the GP hardware...
But that dedicated hardware sits idle most of the time
arkain wrote:3. The use of co-processors in general increases the flexibility of what can be done...
No it doesn't: You would have more flexibility if you applied the same resources to the CPU itself.
arkain wrote:There are more points to be made, but they all point out an oversight in your argument. A GP CPU was never designed to be a solely computational device. That's why they're called "general purpose." The only reason we even have floating point instructions in a GP CPU at all is because Intel was really lazy when designing the 8087. Since the use of the floating-point co-processor suspended the instruction cycle of the CPU it was slaved to, there was really no advantage to making a separate CPU. If it had instead been designed to work asynchronously, allowing the CPU to continue executing other instructions, then it would have remained a co-processor and been as widely praised as the first "graphics accelerators" when they came out.

As a "computational device," all GP CPU's fail, but only because that wasn't the focus of their design.
What is a CPU? It is a device for processing data; quite simply: it is a computational device designed to do anything. If you need another piece of hardware to do what you want/need to... Then it is evidently true that the CPU is failing at being general purpose.

As for the x86 floating point coprocessor: The 8087/80287/80387 all executed instructions asynchronously from the CPU. In fact, this has always been the case. On the 8086, if you issue an FDIV instruction, the 8086 will process its own instructions for the ~100 or so cycles that it takes for the FPU to perform that division, provided you don't execute another FPU instruction (which requires use of the NPU, anyway) during that time. If you do, then the processor will halt until the 8087 releases the WAIT line so that it can continue.
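The overlap is easier to see written out. This is only a C-level sketch (on a real 8086/8087 the discussion would be about where the assembler places the FDIV and the next FWAIT), but the scheduling idea is the same: issue the divide, do independent work, and only stall when the result is actually needed:

/* Sketch of latency hiding, not cycle-accurate: the divide is issued first,
 * independent integer work runs while the coprocessor grinds away, and the
 * CPU only has to wait at the point where it consumes the result. */
double overlap_demo(double a, double b, const int *buf, int n)
{
    double q = a / b;            /* FDIV issued; the 8087 starts working */

    int sum = 0;                 /* plenty of integer work the 8086 can do meanwhile */
    for (int i = 0; i < n; i++)
        sum += buf[i];

    return q + sum;              /* first point where the CPU must wait for the 8087 */
}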

As for why it was not made completely independent of the 8086? For a start, the 8087 depends upon the x86 for control flow...
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance

Re: Future of CPUs

Post by Combuster »

Owen wrote:No it doesn't: You would have more flexibility if you applied the same resources to the CPU itself.
No you don't. If you were, for example, to integrate the processing power of a blitter into a CPU, you would lose performance. That is because a general purpose core takes more steps to do the same thing, and you lose a lot of time setting it up for such an operation, losing everything you gained from hardwiring.
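For reference, the work a blitter hardwires looks something like this when the general purpose core has to do it itself; a rough 32bpp copy-rectangle sketch, with "pitch" meaning the row stride in pixels:

#include <stdint.h>
#include <string.h>

/* Copy a w*h rectangle from one 32bpp framebuffer to another; this is the
 * per-scanline loop a CPU runs where a blitter just gets a few registers
 * loaded and goes. */
void soft_blit(uint32_t *dst, int dst_pitch, const uint32_t *src, int src_pitch,
               int x_dst, int y_dst, int x_src, int y_src, int w, int h)
{
    for (int row = 0; row < h; row++) {
        const uint32_t *s = src + (size_t)(y_src + row) * src_pitch + x_src;
        uint32_t       *d = dst + (size_t)(y_dst + row) * dst_pitch + x_dst;
        memcpy(d, s, (size_t)w * sizeof *d);   /* one scanline at a time */
    }
}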

That and you lose efficiency due to centralisation and the resulting bus bottlenecks, and power drain due to the higher clock speed required. I know you well enough that you would, when given the choice, choose a plug-in graphics card over the same GPU integrated into the northbridge.

Heck, what do you think would happen if I took your statement to the extreme, and the CPU became responsible for bitbanging the HDMI port? We don't have a dedicated controller for that since it's better to integrate that into the CPU... :twisted:
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
arkain
Posts: 7
Joined: Fri Aug 13, 2010 1:50 pm

Re: Future of CPUs

Post by arkain »

Owen wrote:
arkain wrote:
Owen wrote:The fact that video decoders and GPUs exist is testament to the failings of CPUs as computation devices, not to the advantage of a special purpose core in this regard.
2. Using dedicated hardware in conjunction with GP hardware allows for better use of the GP hardware...
But that dedicated hardware sits idle most of the time
That depends on the nature of the hardware involved, doesn't it? Is it so bad that my sound card sits idle most of the time? What about my network card (actually, it doesn't; it's almost always in use)? And I dare say that my graphics card is never sitting idle unless the display has gone to sleep or I've turned the machine off (neither of which happens very often).
Owen wrote:
arkain wrote:3. The use of co-processors in general increases the flexibility of what can be done...
No it doesn't: You would have more flexibility if you applied the same resources to the CPU itself.
Incorrect both in theory and in practice. If this were the case, then you'd be hard pressed to explain why there is such a thing as DMA... even harder pressed to explain Bus Mastering. The simple fact is that while PIO can do these things just fine, it's far faster to off-load the work onto a dedicated co-processor and let the CPU spend its time on the more general task of running applications.
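The contrast is easy to show. A stripped-down PIO sketch (command setup and status polling omitted) has the CPU personally fetch every word of a sector from the ATA data port, while a bus-mastering controller would write the sector into RAM itself and just raise an interrupt:

#include <stdint.h>

/* The usual GCC inline-asm port-read wrapper. */
static inline uint16_t inw(uint16_t port)
{
    uint16_t v;
    __asm__ volatile ("inw %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

/* PIO: 256 words per 512-byte sector, one at a time, CPU fully occupied.
 * With bus-mastering DMA this loop disappears and the CPU runs other code
 * until the controller's completion interrupt arrives. */
static void ata_pio_read_sector(uint16_t *buf)
{
    for (int i = 0; i < 256; i++)
        buf[i] = inw(0x1F0);     /* primary channel data port */
}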
Owen wrote:
arkain wrote:There are more points to be made, but they all point out an oversight in your argument. A GP CPU was never designed to be a solely computational device. That's why they're called "general purpose." ...

As a "computational device," all GP CPU's fail, but only because that wasn't the focus of their design.
What is a CPU? It is a device for processing data; quite simply: it is a computational device designed to do anything. If you need another piece of hardware to do what you want/need to... Then it is evidently true that the CPU is failing at being general purpose.
Again, incorrect. A CPU is a device for performing actions. It's entirely possible, although useless, to write a program that collects no data and produces no output. That data processing is a common part of what CPUs do is a testament to the fact that people do not like to use useless software. ](*,)

All of the examples of co-processors that have been mentioned so far can be implemented in software (and at some point, probably have been). The simple fact is that if a dedicated piece of silicon smaller than the width of a fingernail can perform a reasonably often repeated task with sufficiently greater efficiency than an equivalent software encoding, people will make a specialized processor specifically for that function.

If you still want to offer that a co-processor doesn't increase the flexibility of what's possible, then do explain why 3D applications did not appear in abundance until the invention of the GPU. My answer is simply this: 3D applications did appear, but were limited due to the large amount of time required to perform the vector calculations needed to create a single rendering. It's not that the CPU failed to do the work, and thus failed to be general purpose. It's that moving the calculations to a co-processor that was dedicated to vector calculations and able to push the result to the screen on demand freed the CPU to manage the contents of the 3D image.
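For a sense of scale, the core of that per-vertex work is just a 4x4 matrix times a 4-vector, sketched below: 16 multiplies and 12 adds per vertex, repeated for every vertex of every frame, plus lighting and clipping on top, all on the CPU:

/* One vertex through one transform; a GPU hardwires exactly this. */
typedef struct { float v[4]; } vec4;
typedef struct { float m[4][4]; } mat4;

static vec4 transform(const mat4 *m, vec4 p)
{
    vec4 r;
    for (int i = 0; i < 4; i++) {
        r.v[i] = 0.0f;
        for (int j = 0; j < 4; j++)
            r.v[i] += m->m[i][j] * p.v[j];   /* row i dot the vertex */
    }
    return r;
}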

If this doesn't qualify as an increase in flexibility, I can offer you other examples, like the difference between repeatedly probing i/o ports to produce a series of square waves that approximate a given frequency (Apple ][ sound programming) vs a sound card. Co-processors increase flexibility by reducing the need to manage low-level, complicated details, freeing resources to do other things.
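For contrast with the bit-banged Apple ][ approach, here is the PC-speaker version of "let a tiny bit of dedicated hardware handle it": program the PIT's channel 2 and the square wave plays with no further CPU involvement. A minimal sketch using the usual port I/O wrappers:

#include <stdint.h>

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

/* Let PIT channel 2 generate the square wave instead of toggling a port
 * in a software-timed loop. */
static void speaker_tone(uint32_t freq)
{
    uint16_t divisor = (uint16_t)(1193182 / freq);   /* PIT input clock / frequency */

    outb(0x43, 0xB6);                  /* channel 2, lobyte/hibyte, square-wave mode */
    outb(0x42, divisor & 0xFF);
    outb(0x42, (divisor >> 8) & 0xFF);

    outb(0x61, inb(0x61) | 0x03);      /* gate channel 2 and enable the speaker */
}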
Owen
Member
Posts: 1700
Joined: Fri Jun 13, 2008 3:21 pm
Location: Cambridge, United Kingdom

Re: Future of CPUs

Post by Owen »

DMA is not a coprocessor; to be a coprocessor, it must obviously process data. Neither, in all likelihood, does your sound output count: I haven't seen in a long time a sound card which is anything besides a DMA engine glued to a DAC.

As for your graphics card sitting idle: The computational elements are. Most of the time, it is also little more than a DVI serializer/VGA DAC connected to a framebuffer. The shader & transformations don't spin up until you give them work to do.

I never said that coprocessors were useless; I said that their existence implies a failing in the capabilities of the CPU. And this is blatantly true: If your CPU could do everything your graphics card can, why would you buy a graphics card? At best you would buy another CPU!

(Of course here I am ignoring things like power constraints; an embedded device would fairly trade off flexibility for power consumption)
arkain
Posts: 7
Joined: Fri Aug 13, 2010 1:50 pm

Re: Future of CPUs

Post by arkain »

Owen wrote:DMA is not a coprocessor; to be a coprocessor, it must obviously process data. Neither, in all likelihood, does your sound output count: I haven't seen in a long time a sound card which is anything besides a DMA engine glued to a DAC.
So, you're trying to say that the IC that responds to requests by moving large blocks of data either back and forth across memory or to and from I/O space is not "processing data"? You've certainly got an interesting idea of what "processing data" entails. I'd like to hear your definition.
Owen wrote:As for your graphics card sitting idle: The computational elements are. Most of the time, it is also little more than a DVI serializer/VGA DAC connected to a framebuffer. The shader & transformations don't spin up until you give them work to do.
Close, but no cigar. Did you forget that one of the jobs of the GPU is to accelerate 2D graphics as well? Sure, the shader, z-buffer, w-buffer, and pretty much any other element in the GPU specific to 3D is going to sleep most of the time (until my 3D screen saver kicks in), but the transformation, pixelation, and vector units are pretty much always running.
Owen wrote:I never said that coprocessors were useless; I said that their existence implies a failing in the capabilities of the CPU. And this is blatantly true: If your CPU could do everything your graphics card can, why would you buy a graphics card? At best you would buy another CPU!
I would buy a graphics card because it is dedicated to the functions required for computer graphics and can do them faster than the equivalent CPU software. In fact, that was the exact reason why I bought my first video card with a "graphics accelerator." Windows ran just fine without the acceleration provided by the S3 Virge, but with that card, my computer was 3x more responsive. The chips on the S3 card did exactly the same thing that was being done in software, only on silicon.

You're at a distinct disadvantage here in that you're speaking with someone who has designed hardware and written software. I've actually written software as a means of testing what will become hardware. It's fairly common to test a function in software before implementing it in hardware. It saves you the trouble of going through some really slow hardware debugging cycles.
skyking
Member
Posts: 174
Joined: Sun Jan 06, 2008 8:41 am

Re: Future of CPUs

Post by skyking »

berkus wrote: What I was talking about: the C64 has several very simple processors, each of which is very special purpose. OMAP boards have special-purpose DSPs in addition to a relatively weak CPU, and the same goes for Tegra boards. They obtain fairly good general purpose performance nonetheless, rendering your point moot.
However, as long as you don't have any use for the special operations that are provided, you gain no performance from them; it's just dead silicon and wasted effort on the developer's part.

Also, a DSP need not be as special purpose as the name implies...
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance

Re: Future of CPUs

Post by Combuster »

It also looks like we are all forgetting the simple fact that most equipment is not general purpose. A computer is expected to do a certain form of output and input, and as such comes with dedicated silicon since that's what it is designed and meant to do. Only for all the other things, there's the general purpose CPU.
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
255255255
Posts: 1
Joined: Sat Sep 04, 2010 1:05 am

Re: Future of CPUs

Post by 255255255 »

The next 10 years will be pretty much the end of any "improvement" for at least 40 years.

Optical Computing - May take off for scientific research in the field of calculating large prime numbers, but that's all, and nothing for the PC.
Quantum Computing - May take off when trying to predict if the cat in the box is dead or alive.

**I offer no explanation for my reasoning.**

But there is one thing, and that is the FPGA: they will begin to put them everywhere, first and foremost on the graphics card. Having pseudo-hardwired algorithms will certainly speed things up. You can already get some motherboards with an FPGA socket.

Introduce the conspiracy theory: The government of the world won't let any super technology take off until they are defended against it.

Also, Intel will remove all the legacy 16-bit crap they made 30 years ago from the chips, and eventually the 32-bit stuff as well. And during the 40-year dry spell you can bet the market shares of all the big names get saturated into a pool of newcomers, so sell now, buy later.

Anyway, the biggest hurdle is the material they make it out of, and mass production of a newly engineered material (yes, they have to invent this material!) is only common for plastic; only research prototypes will be made if anything amazing is done.

Also, what's all this nonsense about the number of atoms in the universe? Come on guys, THIS IS SERIOUS.
p.s. I don't know what I am talking about.
skyking
Member
Posts: 174
Joined: Sun Jan 06, 2008 8:41 am

Re: Future of CPUs

Post by skyking »

Combuster wrote:It also looks like we are all forgetting the simple fact that most equipment is not general purpose. A computer is expected to do a certain form of output and input, and as such comes with dedicated silicon since that's what it is designed and meant to do. Only for all the other things, there's the general purpose CPU.
By general purpose I meant the purpose computers in general are used for. For many special purposes they will do very well, since the production volumes will mean a lower price; consequently it's questionable whether moving to 128-bit (truly, not just for marketing purposes) is worth it, since nearly nobody would benefit from it.
Dario
Member
Posts: 117
Joined: Sun Aug 31, 2008 12:39 pm

Re: Future of CPUs

Post by Dario »

Brendan wrote:Hi,

My predictions are:
  • more cores (with little improvement in the performance of each core)
  • wider SIMD (e.g. AVX)
  • less power consumption per core
  • more integration (Intel and AMD have already shifted the memory controller onboard, and have CPUs with built-in GPU). In the next 10 years I can imagine ethernet, disk controllers and HPET being shifted into the CPU, and eventually the RAM chips too (which would improve RAM bandwidth and latency, and allow for smaller caches).

I think that all of us will see nothing but silicon dies in our lifetimes. Quantum computing and the like... not gonna happen, at least not that soon. Just like the car industry in the last 100 years with combustion engines (smaller engines, lower fuel consumption and CO2 emissions but greater power), the IC industry will only perfect current technology: more cores, more functionality integrated on the chip (networking, graphics, sound), lower consumption, better transistor stability, better design, smarter schedulers and therefore wider pipes...
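On the quoted "wider SIMD (e.g. AVX)" point, this is all it means at the code level: the same loop body, eight float lanes at a time. A minimal sketch, assuming AVX support (compile with -mavx) and that n is a multiple of 8:

#include <immintrin.h>

/* One AVX add processes eight floats per iteration. */
static void add_arrays(float *dst, const float *a, const float *b, long n)
{
    for (long i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(dst + i, _mm256_add_ps(va, vb));
    }
}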

Brendan wrote: As more is included in the chip itself, Intel and AMD will become more flexible (e.g. allow for a wide variety of "cores + gpus + other devices" combinations); and 80x86 will be used in more and more mobile/portable devices.
I don't agree with that. I think that open software will bring others like ARM into the market. I see an overall change in the business model which will affect companies like MS and Intel in a negative way. The customer will be important; the customer will be the one making the decisions that affect the market, not giants like MS. I think the 80s model is finished. Open source will be the standard for platforms (OS, frameworks...).
____
Dario
TylerH
Member
Posts: 285
Joined: Tue Apr 13, 2010 8:00 pm

Re: Future of CPUs

Post by TylerH »

rdos
Member
Posts: 3276
Joined: Wed Oct 01, 2008 1:55 pm

Re: Future of CPUs

Post by rdos »

It's a 32-bit CPU with 48 cores. :mrgreen:
So much for Intel dropping legacy and 32-bit support. Seems like they dropped 64-bit instead in order to fit more cores on the chip.