Re: x86 going away
Posted: Tue Jul 28, 2020 9:36 am
Oh come on, you guys! I'm not even bothering to read most of page 2 of this thread. Apologies to those with balanced opinions, but I've been looking into this for ages, and seeing all the paranoia again is exhausting! It doesn't matter if many devices are locked down, so long as some are not and we can afford them. Have a look at PostmarketOS and see how many phones they get their fully open Linux onto. Sure, it's not the majority of phones, but many of them were very widely available.
I'm planning on diversity myself: starting with the PC because I know stuff about it, then, if my poor broken brain can understand the drivers in PostmarketOS, my old phone. If not, there are thousands of alternatives, probably literally. One of my minor goals is to bitbang video output from a microcontroller. If it works, that's my GPU (and maybe I could squeeze in audio too); then I get a powerful microcontroller for the CPU and some much smaller microcontrollers for other IO, and there's my computer for my OS. When I say "microcontroller," I mean a device designed to run bare-metal code.
Developing a compiler as I am, and not striving for the highest efficiency, I don't have to worry about most of the junk in x86 or any other architecture; none of them are simple any more. The compiler for Plain English Programming emits only something like 25 distinct instructions. I'm sure many Forths are that simple too. If a compiler can be that simple even on x86, why should I care which architecture I pick?
Re. RISC v. CISC: most compilers emitted few distinct instructions in the past. If they emit more now, it's mainly to optimize cache utilization. I remember hearing the news when compiler writers realised that using a wider range of instructions could improve cache utilization; I think it was the mid-to-late 2000s. Earlier, for the Pentium, Intel looked at which instructions compilers actually emitted and optimized those ahead of the others, so back then it wasn't even a good idea to use many different instructions.