The Mill: a new low-power, high-performance CPU design

Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance
Contact:

Re: The Mill: a new low-power, high-performance CPU design

Post by Combuster »

You know what? Show us the code a compiler would output if it did branch prediction itself. Give us an example that's better than the CPU doing it.

Code:

(...)
for (int i = 0; i < 50; i++)
{
    (do stuff)
}
(...)

Code:

(...)
CLR r1
.loop:
CMP r1, 50-1
MOV r2, .endloop
(add n-2 parallel ops here)
MOVNBE r2, .loop
(add n-1 parallel ops here)

(do parts of stuff)

DJMP r2 ; fetching starts now
(do last few instructions of stuff to fill delay slot)
INC r1
.endloop: 
; jump will have taken effect here.
(...)
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: The Mill: a new low-power, high-performance CPU design

Post by Rusky »

And what if stuff modifies i?
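To make that hazard concrete, here is a minimal C sketch (the loop body and the aliasing pointer are invented purely for illustration) in which "stuff" writes to i through a pointer, so a statically scheduled 50-iteration exit branch would simply be wrong:

```c
/* Hypothetical sketch: if "stuff" can modify i (here via an aliasing
 * pointer), the compiler can no longer assume the loop runs exactly
 * 50 times, so any exit branch scheduled at compile time for a fixed
 * trip count mispredicts. */
static int run_loop(void)
{
    int iterations = 0;
    for (int i = 0; i < 50; i++) {
        int *p = &i;        /* alias onto the loop counter */
        if (i == 10)
            *p = 48;        /* "stuff" modifies i: loop exits early */
        iterations++;
    }
    return iterations;      /* 12, not the 50 a static schedule assumed */
}
```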

You also have the problem of telling the CPU where and how much to prefetch. You could add a prefetch instruction so the CPU doesn't have to worry about whether you change r2 before the jump, which would even let you prefetch basic blocks rather than just guessing the size, but...

The Mill does that in hardware, running ahead through the chain of basic blocks in parallel with execution, to prefetch them all (the way it does this is pretty cool, you should check out the talk). It predicts which branch will be taken from each BB, and caches that, as well as storing the cache back with the executable so it can be used later. Do that from the compiler alone!
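As a rough software illustration only (the exit-table layout, block numbering, and walk depth here are invented for the example, not the Mill's actual format), the run-ahead idea amounts to a prefetcher walking a chain of cached per-block exit predictions ahead of execution:

```c
/* Toy model of run-ahead prefetch: each basic block has a cached exit
 * prediction; the prefetcher follows the predicted chain of blocks
 * ahead of execution, instead of predicting one branch at a time.
 * -1 means "no prediction cached". */
#define NBLOCKS 8

static int predicted_exit[NBLOCKS] = {1, 2, 5, -1, -1, 6, 7, -1};

/* Collect up to `depth` blocks along the predicted chain starting at
 * `bb`, in the order a prefetcher would queue the fetch requests. */
static int run_ahead(int bb, int depth, int out[])
{
    int n = 0;
    while (bb >= 0 && n < depth) {
        out[n++] = bb;               /* issue prefetch for this block */
        bb = predicted_exit[bb];     /* follow the cached exit prediction */
    }
    return n;
}
```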
embryo

Re: The Mill: a new low-power, high-performance CPU design

Post by embryo »

Rusky wrote:What other algorithm are you possibly going to use for detecting aliasing stores?
Stores in general? It is better to analyse the high-level code and find the aliasing assignments. If that is impossible, then additional information about the situation is required: how far apart are the potential aliases? What instructions are in between?
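For illustration, C's restrict qualifier is one existing way such high-level aliasing information reaches the compiler (the function names below are made up for the sketch):

```c
/* Sketch of the aliasing-stores problem: whether these two stores can
 * be reordered depends on whether a and b alias. Without more
 * information the compiler must assume the worst. */
static void stores_may_alias(int *a, int *b)
{
    *a = 1;      /* if b == a, this store is overwritten below */
    *b = 2;
}

/* With restrict, the caller promises the pointers do not alias, so
 * the compiler is free to reorder or vectorise the stores. */
static void stores_no_alias(int *restrict a, int *restrict b)
{
    *a = 1;
    *b = 2;
}
```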
thepowersgang
Member
Posts: 734
Joined: Tue Dec 25, 2007 6:03 am
Libera.chat IRC: thePowersGang
Location: Perth, Western Australia
Contact:

Re: The Mill: a new low-power, high-performance CPU design

Post by thepowersgang »

There are two very large issues with the compiler doing all the work.

1. The compiler has an inherently limited view of the system state. Unless you feed in the entire system (or at least a large chunk of it), there will be unpredictable data sources. The CPU, on the other hand, knows what these data sources are as soon as they enter the cache (which, thanks to prefetching, may be many cycles before they're used).

2. How would the compiler encode the variable likelihood of a branch being taken? It would have to emit a rather complex set of rules for the prefetch logic... almost as if it were re-encoding the instruction.
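For what it's worth, compilers do already expose a crude static hint: GCC and Clang's __builtin_expect. It encodes only a single fixed taken/not-taken bias per branch, not the variable likelihood being asked about here:

```c
/* Static branch hints via __builtin_expect (GCC/Clang extension).
 * The compiler lays out the likely path as the fall-through; the hint
 * cannot express a probability that changes at run time. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

static int process(int err)
{
    if (unlikely(err))   /* hint: the error path is cold */
        return -1;
    return 0;            /* hot path, laid out as fall-through */
}
```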
Kernel Development, It's the brain surgery of programming.
Acess2 OS (c) | Tifflin OS (rust) | mrustc - Rust compiler
Currently Working on: mrustc
Owen
Member
Posts: 1700
Joined: Fri Jun 13, 2008 3:21 pm
Location: Cambridge, United Kingdom
Contact:

Re: The Mill: a new low-power, high-performance CPU design

Post by Owen »

Combuster wrote:
You know what? Show us the code a compiler would output if it did branch prediction itself. Give us an example that's better than the CPU doing it.

Code:

(...)
for (int i = 0; i < 50; i++)
{
    (do stuff)
}
(...)

Code:

(...)
CLR r1
.loop:
CMP r1, 50-1
MOV r2, .endloop
(add n-2 parallel ops here)
MOVNBE r2, .loop
(add n-1 parallel ops here)

(do parts of stuff)

DJMP r2 ; fetching starts now
(do last few instructions of stuff to fill delay slot)
INC r1
.endloop: 
; jump will have taken effect here.
(...)
BETTER. Trivial loop exit conditions are uninteresting and, well, trivial. Intel have been correctly predicting them for 15 years.

In fact your code is worse, because it wastes two registers and two instructions per cycle.

Please try again :-)
Combuster

Re: The Mill: a new low-power, high-performance CPU design

Post by Combuster »

Owen wrote:BETTER. Trivial loop exit conditions are uninteresting and, well, trivial. Intel have been correctly predicting them for 15 years.

In fact your code is worse, because it wastes two registers and two instructions per cycle.
And you think it's fair to simply strip silicon somewhere and not give it back at an equal rate in another part of the machine?
Now that we've got enough background, let's take a look at Intel's own description of the loop detector:

The Loop Detector analyzes branches to see if they have loop behavior. Loop behavior is defined as moving in one direction (taken or not-taken) a fixed number of times interspersed with a single movement in the opposite direction. When such a branch is detected, a set of counters are allocated in the predictor such that the behavior of the program can be predicted completely accurately for larger iteration counts than typically captured by global or local history predictors.
Which means that it only works if the loop always has the same number of iterations, and always fails the first time :wink:
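That behaviour can be sketched with a toy software model of such a trip-count predictor (invented for illustration; real hardware uses saturating counters and tags, not this exact scheme):

```c
/* Toy model of the loop detector described above: remember the trip
 * count observed last time, and predict "taken" (stay in the loop)
 * until that count is reached. It mispredicts once on the first
 * encounter and once whenever the trip count changes. */
struct loop_pred {
    int learned;   /* trip count learned from the previous run, 0 = none */
    int seen;      /* iterations observed so far in the current run */
};

/* Simulate one complete execution of a loop with `trips` iterations
 * and return how many times the predictor was wrong. */
static int mispredicts_for_trip_count(struct loop_pred *p, int trips)
{
    int miss = 0;
    for (int i = 1; i <= trips; i++) {
        int actual = (i < trips);                 /* last iteration falls through */
        int pred = (p->learned > 0) ? (p->seen + 1 < p->learned) : 1;
        if (pred != actual)
            miss++;
        p->seen++;
        if (!actual) {                            /* loop exited: learn the count */
            p->learned = p->seen;
            p->seen = 0;
        }
    }
    return miss;
}
```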
Rusky

Re: The Mill: a new low-power, high-performance CPU design

Post by Rusky »

And yet, your example is still broken.

Not to mention the Mill prefetches entire functions at a time, even from cold code. So even if it did mispredict a loop exit (they don't specify a predictor algorithm, because different models will probably use different ones), the penalty would be only 5 cycles, and even that can be reduced by speculative execution.

How is your example better than that?
Combuster

Re: The Mill: a new low-power, high-performance CPU design

Post by Combuster »

Because zero mispredictions are better than occasional mispredictions?

(Also, you just used a "no it isn't" as an argument from authority)
Rusky

Re: The Mill: a new low-power, high-performance CPU design

Post by Rusky »

Combuster wrote:Because zero mispredictions are better than occasional mispredictions?
That depends on the cost of a misprediction vs the cost of your implementation.
Combuster

Re: The Mill: a new low-power, high-performance CPU design

Post by Combuster »

So at least you agree that we're all comparing apples to oranges here with this sort of stuff. :wink:
Rusky

Re: The Mill: a new low-power, high-performance CPU design

Post by Rusky »

My point was that this quote was wrong:
Combuster wrote:Bottom line: there's no such thing as the CPU being superior to the compiler.
Combuster

Re: The Mill: a new low-power, high-performance CPU design

Post by Combuster »

Because you're trying to compare apples to oranges again? What is a CPU without compiler worth?
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: The Mill: a new low-power, high-performance CPU design

Post by Brendan »

Hi,
Combuster wrote:Because you're trying to compare apples to oranges again? What is a CPU without compiler worth?
A CPU without a compiler is worth about the same as a compiler without a CPU.

I think the main thing here is that compilers are good at doing the expensive static optimisations before run-time, while CPUs are good at doing the dynamic optimisations during run-time. These are mutually beneficial, not exclusive.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Bender
Member
Posts: 449
Joined: Wed Aug 21, 2013 3:53 am
Libera.chat IRC: bender|
Location: Asia, Singapore

Re: The Mill: a new low-power, high-performance CPU design

Post by Bender »

Brendan wrote:A CPU without a compiler is worth about the same as a compiler without a CPU.
I haven't read this thread carefully, but what is meant by a compiler in the above sentence? A CPU with an assembler but no compiler is worth more than a compiler that targets a non-existent CPU (is that what you mean by a compiler with no CPU?). I might have misunderstood your point. A valid counterpoint is that an assembler is, strictly speaking, a compiler that translates assembly language into machine code.
"In a time of universal deceit - telling the truth is a revolutionary act." -- George Orwell
(R3X Runtime VM)(CHIP8 Interpreter OS)
Owen

Re: The Mill: a new low-power, high-performance CPU design

Post by Owen »

Combuster wrote:
Owen wrote:BETTER. Trivial loop exit conditions are uninteresting and, well, trivial. Intel have been correctly predicting them for 15 years.

In fact your code is worse, because it wastes two registers and two instructions per cycle.
And you think it's fair to simply strip silicon somewhere and not give it back at an equal rate in another part of the machine?
A loop predictor does not take anywhere near the amount of silicon required to add an additional execution unit, or an additional retirement unit.
Combuster wrote:
Now that we've got enough background, let's take a look at Intel's own description of the loop detector:

The Loop Detector analyzes branches to see if they have loop behavior. Loop behavior is defined as moving in one direction (taken or not-taken) a fixed number of times interspersed with a single movement in the opposite direction. When such a branch is detected, a set of counters are allocated in the predictor such that the behavior of the program can be predicted completely accurately for larger iteration counts than typically captured by global or local history predictors.
Which means that it only works if the loop always has the same number of iterations, and always fails the first time :wink:
"Because the Pentium M did it that way, all Intel processors do it that way"?

Nehalem and above implement macro-op fusion, in which "dec reg, jnz backwards" and "cmp reg, const; jnz backwards" pairs can be fused into one micro-op. Especially in the former case, tell me why it would be difficult for the processor to predict that correctly every time. It would surprise me entirely if Intel weren't predicting it correctly.
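As a sketch of the pattern being described (the comment about the emitted instructions is an assumption about typical codegen for x86, not a guarantee), a counted-down C loop whose back edge lowers to the fusible pair:

```c
/* A count-down loop: counting toward zero lets the decrement set the
 * flags that the backward branch tests, which is the "dec reg / jnz"
 * pair that macro-op fusion can merge into one micro-op. */
static int sum_down(int n)
{
    int total = 0;
    while (n--)          /* typically lowered to: dec reg; jnz .loop */
        total += 1;
    return total;
}
```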

Now: Please tell me how you would beat branch-history predictors while using equal silicon area. Remember, if you use register or memory bandwidth to do so, the predictor wins, because those are things which are hard to expand (i.e. every increase in them consumes large amounts of silicon area) compared to the small dedicated resources predictors take.