How do you think CPUs will keep improving & getting faster?
- LieutenantHacker
- Member
- Posts: 69
- Joined: Sat May 04, 2013 2:24 pm
- Location: Canada
How do you think CPUs will keep improving & getting faster?
You think we've hit the limits? Intel isn't doing anything about clock speed anymore, and they keep trying to shrink their chips. No more cores are expected and pretty much everything seems to be crashing into the wall (no room for succession).
I believe it's very possible for CPUs to be even faster and more efficiently designed than they are today, but I suspect the manufacturers won't be able to stretch things much further due to cost and convenience.
I fear that, with the limits CPU speeds have been stuck at over the last few years, we won't have easily attainable general-purpose computing power for massive-scale programs that no single CPU of today can handle. A lot of people are going down the GPGPU road because it seems to be the only way to make some of these massive-scale programs work on current machines. Some of them are still works in progress, like the PlayStation 3 emulator, which is believed to be incapable of running well on current processors without some serious work (GPGPU, for one).
How and when will we see better processors? How do you think they'll be able to get faster, improve their microarchitectures, keep cooling optimal, and still remain conveniently accessible and usable?
The desire to hack, with the ethics to code.
I'm gonna build an 8-bit computer soon, with this as reference: http://www.instructables.com/id/How-to- ... -Computer/
Re: How do you think CPUs will keep improving & getting fast
The AMD solution is to add more cores.
- max
- Member
- Posts: 616
- Joined: Mon Mar 05, 2012 11:23 am
- Libera.chat IRC: maxdev
- Location: Germany
- Contact:
Re: How do you think CPUs will keep improving & getting fast
I actually have no idea about electronics, but I always wondered why they would want to make their processors smaller and smaller; why do they not use the space they saved/build bigger processors to add more cool circuit stuff and whatever? Can someone explain this?
- thepowersgang
- Member
- Posts: 734
- Joined: Tue Dec 25, 2007 6:03 am
- Libera.chat IRC: thePowersGang
- Location: Perth, Western Australia
- Contact:
Re: How do you think CPUs will keep improving & getting fast
Temperature is one of the largest current limits (that, and the HUGE size of the caches on the die).
I saw an article a few months back that described applying the techniques of old vacuum tubes to micro-scale electronics, which would allow faster clock speeds with less propagation delay (the mini vacuum tubes don't have as much capacitance as FETs do). They don't even have to be in a vacuum, as the gate distance is small enough that there is next to no chance of a collision occurring.
(A link on the idea, not sure this is what I originally read - http://www.extremetech.com/extreme/1850 ... licon-fets)
Kernel Development, It's the brain surgery of programming.
Acess2 OS (c) | Tifflin OS (rust) | mrustc - Rust compiler
Currently Working on: mrustc
- Combuster
- Member
- Posts: 9301
- Joined: Wed Oct 18, 2006 3:45 am
- Libera.chat IRC: [com]buster
- Location: On the balcony, where I can actually keep 1½m distance
- Contact:
Re: How do you think CPUs will keep improving & getting fast
I still think that different architectures, and teaching people to use parallelism properly, are key to improvements.
- iocoder
- Member
- Posts: 208
- Joined: Sun Oct 18, 2009 5:47 pm
- Libera.chat IRC: iocoder
- Location: Alexandria, Egypt | Ottawa, Canada
- Contact:
Re: How do you think CPUs will keep improving & getting fast
max wrote: I actually have no idea about electronics, but I always wondered why they would want to make their processors smaller and smaller; why do they not use the space they saved/build bigger processors to add more cool circuit stuff and whatever? Can someone explain this?
AFAIK when you make the chip smaller, you can make more integrated circuits out of a single silicon wafer, so the cost drops considerably. Another reason is that a smaller size means smaller delays: if the chip is large, signals take a long time to travel from one place on the chip to another...
Please anyone correct me if I am wrong.
- Owen
- Member
- Posts: 1700
- Joined: Fri Jun 13, 2008 3:21 pm
- Location: Cambridge, United Kingdom
- Contact:
Re: How do you think CPUs will keep improving & getting fast
max wrote: I actually have no idea about electronics, but I always wondered why they would want to make their processors smaller and smaller; why do they not use the space they saved/build bigger processors to add more cool circuit stuff and whatever? Can someone explain this?
Smaller means: everything is closer together, so less speed-of-light delay. FET gates are smaller, so lower capacitance. You can normally reduce the voltage somewhat, so lower power. The first two make your design faster; the latter means you have more power available to do stuff with.
It also means more leakage (an effect which really took hold around 130-90nm), which basically means "quantum effects apply now" and electrons occasionally tunnel through transistors. This means that everything is switched on a little all the time, and so your chip is wasting power even when it isn't doing anything.
The smaller you go, the more leakage affects you (which increases your static power draw), and therefore the larger proportion of total power it becomes. This is why things have moved from clock gating (where you just turn off the clock to a portion of the design) to power gating.
They do quite often use the extra area a die shrink gives you to add more features. If you look at Intel's tick/tock strategy, for example: on a tick they shrink the previous core, and then the tock is an improved core which normally uses (some of) the extra space, as well as making design improvements.
Re: How do you think CPUs will keep improving & getting fast
Hi,
LieutenantHacker wrote: You think we've hit the limits?
No (yes).
What's happened is that, for single-thread performance, we've picked all the low hanging fruit. There's still some fruit left to pick, but it's very hard to reach. CPU manufacturers will keep trying, and will keep improving single-thread performance, but the improvements are going to be small (e.g. 5% faster than the previous generation) and nowhere near what we saw last century - e.g. performance differences from 80386 (4 MIPS) to 80486 (11 MIPS) to Pentium (188 MIPS) to Pentium III (3000 MIPS).
The "simple" solution is adding more cores. This is probably the hardest "simple" solution that's ever existed; because the traditional procedural programming model doesn't lend itself to scalable solutions (locks are hard to get right). Getting people to shift to a different programming model (e.g. the actor model) that has proven scalability advantages is the most promising way forward.
However, given that (e.g.) one of the fastest growing programming languages (python) still doesn't even have any concurrency at all (even though we've had multi-core for over 10 years now), getting people to shift to a whole new programming model is going to be a massive challenge. What I think will happen is that most programmers will continue to suck, and as the number of cores increases most programmers will just suck more. It's going to take another 20 years before "average" programmers are able to write software capable of handling the hardware we have now.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: How do you think CPUs will keep improving & getting fast
Brendan wrote: It's going to take another 20 years before "average" programmers are able to write software capable of handling the hardware we have now.
Or there will be compilers that can do most of a programmer's job.
- Owen
- Member
- Posts: 1700
- Joined: Fri Jun 13, 2008 3:21 pm
- Location: Cambridge, United Kingdom
- Contact:
Re: How do you think CPUs will keep improving & getting fast
Brendan wrote: However, given that (e.g.) one of the fastest growing programming languages (python) still doesn't even have any concurrency at all (even though we've had multi-core for over 10 years now), getting people to shift to a whole new programming model is going to be a massive challenge. What I think will happen is that most programmers will continue to suck, and as the number of cores increases most programmers will just suck more. It's going to take another 20 years before "average" programmers are able to write software capable of handling the hardware we have now.
For a lot of the uses of Python, the concurrency approach is to run multiple instances of the program. This isn't always possible, but for a wide variety of apps it's applicable (especially, as Brendan will delight to think about, those which scale best).
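node.js (the example language used later in this thread) takes the same route with its cluster module: fork one copy of the program per core and let the OS schedule them. A minimal sketch; the port number and response text are just placeholders:
Code:
var cluster = require('cluster');
var os = require('os');
var http = require('http');

if (cluster.isMaster) {
    // Fork one worker per CPU; each worker runs its own copy of this script.
    for (var i = 0; i < os.cpus().length; i++) {
        cluster.fork();
    }
} else {
    // Each worker is ordinary single-threaded code; the parallelism comes
    // from running several instances side by side.
    http.createServer(function (req, res) {
        res.end('handled by worker ' + process.pid + '\n');
    }).listen(8080);
}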
- AndrewAPrice
- Member
- Posts: 2299
- Joined: Mon Jun 05, 2006 11:00 pm
- Location: USA (and Australia)
Re: How do you think CPUs will keep improving & getting fast
You can find 6 and 8 core processors for reasonable costs now. I think these are going to continue to grow, and it won't be long until we have 16-core desktop processors.
What would the programming model be like?
I'd like to see some kind of asynchronous parallel event model.
For example, node.js is event-based, but it's single threaded and non-blocking, and that removes a lot of OS overhead.
Here's an example of this programming model:
Code:
var write_log = function(msg) {
file.open("log.txt", function(success, handle) {
if(!success) return;
file.write(handle, msg + "\n", function(success) {
file.close(handle, null);
});
});
}
Node.js is inherently single threaded. When the main body finishes, it returns to an event loop that waits for the next event. What about an OS that supported lightweight, short-lived threads, where a thread is spawned immediately when an event is triggered, running its handler in parallel?
Spawning a new thread could be as simple as:
Code:
spawn(function() {
// I'm in another thread!
});
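Combining the two ideas, an event source could hand each event to a fresh lightweight thread. A rough sketch, where on_request() is a made-up event registration call in the same spirit as the file API above:
Code:
// Hypothetical: on_request() registers a handler for incoming requests,
// like the callback-style file API above; spawn() runs the closure on a
// new lightweight thread instead of queuing it on the single event loop.
on_request(function(request) {
    spawn(function() {
        // Runs in parallel with other handlers, so a slow write_log()
        // no longer holds up the event loop.
        write_log("handled " + request.url);
    });
});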
My OS is Perception.
Re: How do you think CPUs will keep improving & getting fast
MessiahAndrw wrote: Spawning a new thread could be as simple as:
Code:
spawn(function() { // I'm in another thread! });
How would you pass in parameters?
Do you have access to local variables in the parent scope? Who is responsible for synchronizing access to these variables?
Code:
int i = 0;
spawn(function() {
i++;
});
Project: OZone
Source: GitHub
Current Task: LIB/OBJ file support
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott
- AndrewAPrice
- Member
- Posts: 2299
- Joined: Mon Jun 05, 2006 11:00 pm
- Location: USA (and Australia)
Re: How do you think CPUs will keep improving & getting fast
SpyderTL wrote: MessiahAndrw wrote: Spawning a new thread could be as simple as:
Code:
spawn(function() { // I'm in another thread! });
How would you pass in parameters? Do you have access to local variables in the parent scope?
Yes. Internally you could pass a function pointer with a pointer to the closure block, and a high level language compiler can work out the details. In C, that would translate as a void* pointer.
SpyderTL wrote: Who is responsible for synchronizing access to these variables?
Code:
int i = 0; spawn(function() { i++; });
In the model in my head it'd be the programmer and/or compiler.
In an actor-based model, you could lock the actor when an event on it is triggered, essentially making it synchronous.
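A rough sketch of that (make_actor() and the message format are made up, in the same spirit as spawn() above): an actor is a mailbox plus a handler, and messages are queued and handled one at a time, so the actor's own state never needs explicit locks.
Code:
// Hypothetical actor: messages are queued and handled one at a time,
// so the actor's private state is never touched by two handlers at once.
function make_actor(handler) {
    var queue = [];
    var busy = false;
    function next() {
        if (queue.length === 0) { busy = false; return; }
        var msg = queue.shift();
        handler(msg, next);          // the handler calls next() when it is done
    }
    return function send(msg) {
        queue.push(msg);
        if (!busy) { busy = true; next(); }
    };
}

var counter = (function() {
    var count = 0;                   // private state, only ever touched by the handler
    return make_actor(function(msg, done) {
        count += msg.amount;
        done();
    });
})();

counter({ amount: 1 });              // messages to the actor are handled in order
counter({ amount: 2 });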
It's not a perfect model, just something I threw out there.
My OS is Perception.
-
- Member
- Posts: 1146
- Joined: Sat Mar 01, 2014 2:59 pm
Re: How do you think CPUs will keep improving & getting fast
I'm still waiting for 128-bit CPUs to come out (not personally, actually, because that would cause too many compatibility problems, but I still don't know what's happened to that idea), and also for people to actually start getting the true performance out of their CPUs, rather than using up all the extra power on the bloat that comes with modern operating systems. My mother's new Windows 7 computer (2GB RAM, 2.2GHz dual-core CPU) performs only marginally better than her old XP one (1GB RAM, 2.2GHz single-core CPU), which had exactly half the specifications of the new one. (On the other hand, my new Linux laptop (4GB RAM, 2GHz dual-core CPU) by far outperforms my old Linux desktop (256MB RAM, 1GHz single-core CPU), which was, to be fair, rather outdated; the laptop's specs are exactly 8 times higher.)
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.
Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing
- AndrewAPrice
- Member
- Posts: 2299
- Joined: Mon Jun 05, 2006 11:00 pm
- Location: USA (and Australia)
Re: How do you think CPUs will keep improving & getting fast
What would you use a 128-bit CPU for? It will take some time before we exhaust 64-bit address spaces, and while I can see the benefits of 128-bit registers (calculating galactic distances down to the millimetre in space simulators, etc.), that could be something better handled by a 128-bit ALU extension.
My OS is Perception.