I was wondering what exactly causes RAM latency, and how come it can be changed?
Jules
RAM latency
The original idea of RAM is: the system puts a certain address on the address lines of the RAM chip, and the chip puts the corresponding data content on the data lines.
When systems became faster and more complex internally, they introduced "wait states": The system gave the address, and then had to wait n cycles until the data came out.
Then they thought of "burst mode", where multiple successive RAM accesses would be handled in a "burst" of output, eliminating intermediate wait states.
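To put some (made-up) numbers on that, here's a toy cycle count in C - the CL of 3 and the burst length of 8 are illustrative values I picked, not from any real datasheet:

#include <stdio.h>

/* Toy model: illustrative numbers only, not taken from a real datasheet. */
#define CL          3   /* cycles from "address given" to "first data out"  */
#define BURST_LEN   8   /* words delivered back-to-back in one burst        */

int main(void)
{
    /* Without burst mode: every access pays the full latency again. */
    int single_cycles = BURST_LEN * (CL + 1);

    /* With burst mode: pay the latency once, then one word per cycle. */
    int burst_cycles = CL + BURST_LEN;

    printf("8 separate reads: %d cycles\n", single_cycles);
    printf("one 8-word burst: %d cycles\n", burst_cycles);
    return 0;
}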
These kinds of protocols became more complex over time, and today there are several "wait state" numbers specifying the timing of a RAM chip, also called its "latency". This cannot be changed!
A given RAM chip will be capable of stable operation up to a given limit, beyond which it will become unstable and, eventually, non-functional. Usually, a modern chip should have encoded its "safe" latency setting on-chip, where the BIOS can read it. What can be changed is the timing the motherboard uses, i.e. overriding what the chip says it is capable of.
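To get a feel for what those numbers mean in wall-clock time, here's a rough conversion from CAS latency in cycles to nanoseconds - the DDR-400-ish figures are just an example I picked, not a recommendation:

#include <stdio.h>

int main(void)
{
    /* Example figures only (roughly DDR-400 with CL=3); plug in your own. */
    double io_clock_mhz = 200.0;  /* I/O bus clock in MHz         */
    double cas_latency  = 3.0;    /* CAS latency in clock cycles  */

    double cycle_time_ns = 1000.0 / io_clock_mhz;      /* ns per clock */
    double cas_delay_ns  = cas_latency * cycle_time_ns;

    printf("cycle time:  %.2f ns\n", cycle_time_ns);
    printf("CAS latency: %.2f ns\n", cas_delay_ns);
    return 0;
}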
Overriding the chip's own settings is somewhat akin to the ancient practice of using single-sided 5.25" floppies as double-sided disks by cutting a notch at the appropriate place: it gets more out of the hardware, and many people have done so without ever experiencing any problems, but it's a hack.
That's off the top of my head. For more detailed information, check Wikipedia on DRAM.
Every good solution is obvious once you've found it.
Hi,
In addition to Solar's wise words...
There are also RAM access latencies that have nothing to do with the RAM itself - latencies inside the memory controller, latencies in the bus connecting the memory controller to the CPU/s, and latencies inside the CPU.
If you tweak the BIOS settings to make the RAM operate 10% faster, that doesn't affect these other latencies, so you'll probably only see a 5% improvement measured at the CPU.
Of course the performance of software depends on more than just how quickly the CPU/s can access RAM - things like instruction timings, waiting for hard disks, waiting for video/GPU, etc. all come into the overall performance. Making the RAM operate 10% faster might make the CPU access RAM 5% faster, and might make the software you're running 1% faster.
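In other words, it's basically Amdahl's law - only the fraction of time actually spent waiting on the RAM chip itself gets faster. A quick sketch in C; the 50% and 20% fractions are numbers I made up to roughly mirror the 10% -> 5% -> 1% example above:

#include <stdio.h>

/* Amdahl-style estimate: speeding up only one part of the whole. */
static double overall_speedup(double affected_fraction, double local_speedup)
{
    return 1.0 / ((1.0 - affected_fraction) + affected_fraction / local_speedup);
}

int main(void)
{
    double ram_speedup = 1.10;  /* the RAM chip itself runs 10% faster */

    /* Assume only ~half of a memory access is the chip; the rest is
       controller, bus and CPU latency. */
    double mem_speedup = overall_speedup(0.5, ram_speedup);

    /* Assume only ~20% of total run time is spent waiting on memory at all. */
    double app_speedup = overall_speedup(0.2, mem_speedup);

    printf("memory access: %.1f%% faster\n", (mem_speedup - 1.0) * 100.0);
    printf("whole program: %.1f%% faster\n", (app_speedup - 1.0) * 100.0);
    return 0;
}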
However, reliability quickly deteriorates - making the software you're running 1% faster will probably also make the computer 40% more likely to crash, as you need to bypass the RAM manufacturer's safety margin.
So, why do BIOS manufacturers have settings that allow the safe/sane RAM timings to be changed? I honestly don't know, but it probably has something to do with sales and marketing (idiots like playing with their knobs, so give them more knobs).
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Brendan wrote: So, why do BIOS manufacturers have settings that allow the safe/sane RAM timings to be changed? I honestly don't know, but it probably has something to do with sales and marketing (idiots like playing with their knobs, so give them more knobs).
That, and the ability to bypass bogus settings. I had RAM once that was produced as 133 MHz, advertised as 133 MHz, and worked perfectly well at 133 MHz - but the BIOS recognized it as being 100 MHz. I sure was happy it allowed me to set the value manually.
Every good solution is obvious once you've found it.
That actually depends on the type of RAM, and - as I said - is quite complex. You're better off looking at the two articles linked above and following the trail of CAS, RAS, precharge, etc. from there than expecting an exhaustive explanation in this thread.
As for "what's the point of wait states"... oversimplified: computer logic is not instantaneous. You ask for a random address (by giving power / no power to a couple of address lines) and expect the data being output by a power / no power pattern on a couple of data lines. If the RAM chip in question can do this in less than one clock cycle, you don't need a wait state. If it does take longer, you have to wait, because the data lines have not stabilized and the data is not valid yet. You are in a wait state.
Again, that's grossly oversimplified. Modern RAM is "spoken to" in a quite complex protocol.
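If it helps, the controller's decision "how many wait states do I need" can be pictured roughly like this - a toy model in C with made-up timings, nothing resembling a real protocol:

#include <stdio.h>

/* Toy model of a synchronous read: purely illustrative numbers. */
#define CYCLE_TIME_NS   10   /* bus clock period                        */
#define ACCESS_TIME_NS  35   /* time the chip needs to drive valid data */

int main(void)
{
    int elapsed_ns  = 0;
    int wait_states = 0;

    /* First cycle: the address is put on the address lines. */
    elapsed_ns += CYCLE_TIME_NS;

    /* Keep inserting wait states until the data lines have stabilized. */
    while (elapsed_ns < ACCESS_TIME_NS) {
        elapsed_ns += CYCLE_TIME_NS;
        wait_states++;
    }

    printf("data valid after %d ns, %d wait state(s)\n", elapsed_ns, wait_states);
    return 0;
}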
As for "what's the point of wait states"... oversimplified: computer logic is not instantaneous. You ask for a random address (by giving power / no power to a couple of address lines) and expect the data being output by a power / no power pattern on a couple of data lines. If the RAM chip in question can do this in less than one clock cycle, you don't need a wait state. If it does take longer, you have to wait, because the data lines have not stabilized and the data is not valid yet. You are in a wait state.
Again, that's grossly oversimplified. Modern RAM is "spoken to" in a quite complex protocol.
Every good solution is obvious once you've found it.