pranavappu007 wrote:
was thinking if the hardware and the underlying systems weren't this complex, perhaps the software engineers may be able to get a bit more power out of the existing hardware.
Hardware is this complex in part precisely to get more power out of it. There was a time when CPUs were as fast as RAM. No caching was needed, possible, or even desired in those days: the CPU would request a byte from RAM and receive it in the same cycle. Since there was no caching, there was no way to set it up and, importantly, no way to screw it up. Reading and writing RAM worked like reading and writing any other kind of hardware.
Then CPUs got faster. RAM did as well, but not as much as CPUs. CPUs started stalling on RAM cycles, and eventually a layer of caching was added. Then another. Then mainboard engineers screwed up cache implementations once too often, and Intel released their next CPU series on cards. Suddenly there were caches to configure, and if you screw that up, in the best case you lose performance, and in the worst case, no MMIO works anymore. Then CPUs hit the clock-speed limit and started getting more and more cores on a single die, meaning that now even a normal desktop OS has to deal with multicore (with lots and lots of cores), and caching has become an impenetrable beast. But all of it exists just to get more performance.
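To make "caches to configure" concrete: on x86, the memory type of each physical address range is set through MTRRs, which are written via model-specific registers. A minimal sketch, assuming a GCC-style freestanding kernel; the MSR numbers (0x200/0x201) are the architectural IA32_MTRR_PHYSBASE0/PHYSMASK0, but the update protocol is heavily simplified here (real code must disable and flush the caches around the write, per the Intel SDM):

```c
#include <stdint.h>

/* Write a 64-bit value to a model-specific register (ring 0 only). */
static inline void wrmsr(uint32_t msr, uint64_t value)
{
    __asm__ volatile("wrmsr"
                     :
                     : "c"(msr), "a"((uint32_t)value),
                       "d"((uint32_t)(value >> 32)));
}

#define MTRR_TYPE_UC 0x00ULL  /* uncacheable -- required for MMIO    */
#define MTRR_TYPE_WB 0x06ULL  /* write-back  -- the fast RAM setting */

/* Hypothetical helper: mark one physical range uncacheable. If you set a
 * device's register window to write-back instead, writes can linger in
 * the cache and reads can return stale lines -- "no MMIO works anymore". */
void mark_mmio_uncacheable(uint64_t phys_base, uint64_t phys_mask)
{
    wrmsr(0x200, phys_base | MTRR_TYPE_UC);   /* IA32_MTRR_PHYSBASE0 */
    wrmsr(0x201, phys_mask | (1ULL << 11));   /* IA32_MTRR_PHYSMASK0 + valid bit */
}
```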
And caching is just one example. Pipelining, branch prediction, speculative execution: all are born of the desire for more performance.
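As a quick illustration of branch prediction (a standard microbenchmark, not something from the original post): the exact same loop runs noticeably faster on sorted data, purely because the branch inside it becomes predictable. A sketch in C; the effect size depends on the CPU and compiler flags (build with, say, gcc -O1 so the compiler doesn't replace the branch with a conditional move or vectorize it away):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

/* Sum all elements >= 128; the if() is the branch the predictor sees. */
static void time_sum(const char *label, const int *data)
{
    clock_t start = clock();
    long long sum = 0;
    for (int pass = 0; pass < 100; pass++)
        for (int i = 0; i < N; i++)
            if (data[i] >= 128)
                sum += data[i];
    printf("%s: sum=%lld, %.2f s\n", label, sum,
           (double)(clock() - start) / CLOCKS_PER_SEC);
}

int main(void)
{
    static int data[N];
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;

    time_sum("random", data);  /* branch outcome ~50/50: frequent mispredicts */
    qsort(data, N, sizeof(int), cmp_int);
    time_sum("sorted", data);  /* long runs of taken/not-taken: well predicted */
    return 0;
}
```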
pranavappu007 wrote:
Programmers doesn't know low level because it's hard to understand.
Programmers don't know this because they don't have to. You never really need to know what "int a = 1;" means at the hardware level. It is enough to know that you created an object of type "int" named "a" and gave it the initial value 1. Where exactly that object is stored only starts to matter once things go south.
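If you do want to see where such objects end up, here is a small sketch in C (the segment names are typical for ELF-style platforms, not guaranteed by the language, and the optimizer is free to keep local_a in a register or fold it away entirely):

```c
#include <stdio.h>

int global_a = 1;  /* typically the initialized data segment */

int main(void)
{
    int local_a = 1;          /* typically the stack -- or just a register */
    static int static_a = 1;  /* typically the data segment, like global_a */

    /* Taking an address forces the compiler to give the object a real
     * memory location; until something does, "where is it stored" may
     * have no single answer. */
    printf("global: %p\nlocal:  %p\nstatic: %p\n",
           (void *)&global_a, (void *)&local_a, (void *)&static_a);
    return 0;
}
```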