eekee wrote: ↑Mon Nov 25, 2024 4:08 am
I've been thinking about the hardware Unix first ran on; 32KB RAM & 3 1GB disks in the PDP-11, and wondering quite how much of modern Unix's weirdness goes back to that strange ratio.
A while ago I started a project that needed a microcontroller to drive a machine I was building. I grabbed an Arduino, tossed out their IDE/runtime, and targeted it directly with gcc and avr-libc. I was quite surprised to see so much Unix coming out of avr-libc. Then it clicked: this $5 microcontroller I'm working with is a substantial fraction of the hardware that Unix was originally created for. Pretty amazing.
Good to see another OS which isn't just another Unix clone.
I'm giving it a shot anyway. I'm still at the phase where it is a drastically overcomplicated clock, but I'm moving ahead. The kernel main loop now pulls jobs out of a queue and, if no job is present, does a hlt. I multiplexed the PIT and wrapped it with an interface that lets you create timers with arbitrary initial and recurring delays at 1 ms precision. To minimize the amount of time interrupts are disabled, the PIT ISR checks whether it already has a pending job; if not, it puts one in the queue to check for expired timers, then exits. In the interest of future concurrency, any expired timers that are found are themselves submitted as jobs to run.
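The shape of that ISR is roughly this (a sketch with made-up names, not my actual code; the queue, flag, and function names are all illustrative):

```cpp
#include <atomic>
#include <deque>
#include <functional>

// Drained by the kernel main loop; hypothetical stand-in for the real queue.
std::deque<std::function<void()>> job_queue;
// Set while a check-timers job is sitting in the queue.
std::atomic<bool> timer_job_pending{false};

void check_expired_timers() {
    timer_job_pending.store(false, std::memory_order_release);
    // ... walk the timer list here; each expired timer's work would itself
    // be pushed onto job_queue as its own job, for future concurrency.
}

// The ISR does almost nothing: if no check-timers job is already queued,
// enqueue one and return, keeping the interrupts-disabled window short.
void pit_isr() {
    bool expected = false;
    if (timer_job_pending.compare_exchange_strong(expected, true))
        job_queue.push_back(check_expired_timers);
}
```

The compare-exchange is what keeps a 5 kHz tick from flooding the queue: no matter how many ticks land before the main loop runs, at most one check-timers job is ever outstanding.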
I broke the status bar in the console into two parts, the memory info and the clock, with the memory info updating more frequently than the clock. From left to right the memory info shows: size of the kernel program break, available physical pages, total physical pages.
I've been looking at odd languages in the hope of speeding development
I've got the same goal. What I really want is a kernel that lets me play around with experimental APIs in user space, and I'm happy to sacrifice performance and a bunch of other things to get there. I don't expect my first try at making a kernel to wind up anywhere beyond proof-of-concept quality, not even prototype, so even if I succeed at coming up with a new user-space API I like, I fully expect to throw the kernel away anyway. Might as well embrace that.
At the moment I've got newlib and GCC 14.2's libstdc++ running in hosted mode in the kernel, and I'm running GCC with C++23 as the language version. It definitely lets me move a lot faster than C would. I'm traditionally a Perl programmer, lately more of a Python programmer, and, unfortunately, I definitely can't move as fast in C++ as I can in those languages.
I am very, very tempted to run an experiment and see how hard it would be to get Perl running in kernel space. While I'm still down at such a low level, though, I don't yet see any benefit Perl would offer beyond what std::shared_ptr, std::string, std::vector, std::list, etc. already give me. Maybe in the future.
My current task is getting ready to have the kernel execute user-mode processes. I've implemented the trampoline described
over here in assembly and got it running, with the linker placing it in the normal place. I've built a linker script to park the trampoline in the top 1 MiB of the address space, as talked about
here (stripped of debug info, the trampoline is less than 1 KiB, which amuses me).
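For anyone curious, the relevant bit of the linker script looks something like this (a sketch, not my exact script; the address assumes a 32-bit address space and the section and symbol names are illustrative):

```ld
/* Park the trampoline in the top 1 MiB of a 32-bit address space. */
SECTIONS
{
    . = 0xFFF00000;
    .trampoline : {
        trampoline_start = .;
        *(.trampoline)
        trampoline_end = .;
    }
}
```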
My current sub-task is finding the ELF section header for the trampoline so I can get its physical and virtual addresses and map them in the page tables. Finding the section name strings has been more difficult than I expected, but I expect to solve that soon, get the trampoline mapped, and start finding the next things that will be more difficult than I expected.
Once I got the PIT multiplexed I decided to run it at a silly frequency (5 kHz; I might turn it up more) to expose reentrancy issues early. Wow. Never in my life have I seen malloc() go reentrant. That gave me a good laugh.
I've already got spinlocks implemented and in place in the interest of future concurrency, so going reentrant isn't catastrophic. It does cause a deadlock, but it also leaves a nice, easy-to-debug stack trace for GDB. For the moment I solve the reentrancy issue by disabling interrupts in sections of code that are shared between ISR and non-ISR contexts.
After I solve spreading my trampoline around the address space, I think I'll go back and make a spinlock class specifically for the parts of the kernel shared between ISR and non-ISR contexts, and have that lock assert at lock time that interrupts are disabled.
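The idea is roughly this (a portable sketch: in the kernel the check would read the IF bit out of RFLAGS via pushf/pop, but here it's a plain global so the example stands alone; the class name is made up):

```cpp
#include <atomic>
#include <cassert>

// Stand-in for the real CPU interrupt flag (RFLAGS.IF on x86).
bool interrupts_enabled = false;

// Spinlock for state shared between ISRs and normal kernel code.
// Asserts at lock time that interrupts are off, because taking it with
// interrupts enabled means an ISR could fire while we hold it and
// deadlock trying to take it again.
class IsrSharedSpinLock {
    std::atomic_flag locked_ = ATOMIC_FLAG_INIT;
public:
    void lock() {
        assert(!interrupts_enabled &&
               "IsrSharedSpinLock taken with interrupts enabled");
        while (locked_.test_and_set(std::memory_order_acquire))
            ;  // spin; on x86 a pause hint would go in this loop
    }
    void unlock() { locked_.clear(std::memory_order_release); }
};
```

That way the "did you remember to cli first?" bug shows up as a clean assertion failure instead of a once-in-a-blue-moon deadlock.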
This is certainly a fun hobby.