I abandoned Nutoak 4-7 months ago. When I did so I deleted all of its newest source code, so the only copy that remains is the one on GitHub, but it's too old. Sometimes I just don't understand even myself. It could run executables (the only ones that existed for it were calc.nex, shell.nex and a test executable, as far as I remember), it had a total of 7,000 lines of code, and a 100 KB kernel.
Now I'm thinking about starting again. As with Nutoak, it will have its own bootloader.
I'm starting again.
Re: I'm starting again.
Hi,
Lukand wrote:I abandoned Nutoak 4-7 months ago. When I did so I deleted all of its newest source code, so the only copy that remains is the one on GitHub, but it's too old. Sometimes I just don't understand even myself. It could run executables (the only ones that existed for it were calc.nex, shell.nex and a test executable, as far as I remember), it had a total of 7,000 lines of code, and a 100 KB kernel.
Now I'm thinking about starting again. As with Nutoak, it will have its own bootloader.

I've lost count of the number of times I've done this. I find it refreshing in a way - clearing out all the old code to make way for newer/better ideas. I've even got a personal rule for this: the design of each piece must be better in some way than its predecessor.

However; it's also very expensive in terms of development time. For the last few versions I've been pushing towards modularity, with the idea that (hopefully, eventually) I'll reach the stage where I'm actually happy with most of the pieces and only discard/replace modules rather than the whole thing. This tactic hasn't actually worked yet, but I'm much closer than I've ever been before.

Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
- MichaelFarthing
Re: I'm starting again.
It applies to many fields of endeavour:
John Wesley wrote:Once in seven years I burn all my sermons; for it is a shame if I cannot write better sermons now than I did seven years ago
Re: I'm starting again.
Once I write something, I could usually redo it to be 10x faster, with less and cleaner code, and more quickly.
But I still don't delete the older sources; I always keep everything. It can be handy to scavenge the source if I need to do something.
However, there is a 99% chance of not redoing anything. Just because I CAN doesn't necessarily mean I WILL. Usually the first implementation is good to go, so I release and sell the stuff, move on with my life, and don't waste my time redoing the same thing over and over again. After all, software is meant to make a profit, and it would be very bad for profit to do the same thing over and over again. This is usually the problem with coders: they are unable to step from 1 to 2; they are stuck in a very thin perspective, in a bubble world, which in the end even results in fatally bad code, as they wall themselves off from real knowledge and experience.
There are exceptions, however. For example, my OS. I would not be able to do it better. Maybe I could write a faster GUI, or whatever, but I would not be able to write a significantly better OS. With that project I probably maxed out all of my knowledge, everything that was humanly possible. I never plan to design software that complex again in my life. I would not rewrite it under any circumstances.
And I have already moved forward with my life to new software projects and non-software-related projects, if we look at osdev from the technical and research/coding standpoints. From the business side, I will focus on the project from 2018-2019, as this project will have an extremely long (30-40 year) business lifespan.
Operating system for SUBLEQ cpu architecture:
http://users.atw.hu/gerigeri/DawnOS/download.html
Re: I'm starting again.
@Geri: Even I can feel that I can always make my OS better as I come up with new techniques to solve the problems at hand.
If I can do that, coming from a country where this sort of technology has never shone and where nobody teaches it in real detail, then anyone can improve. They just need to like it. The fact is that my main focus is OSes, because they let me reach the maximum level that programming has to offer.
My latest technique is to arrange and improve my kernel so that it's easy to add it to another existing kernel, for example to Linux or ReactOS. From there, I will use my kernel embedded in the Linux or ReactOS Ring 0 kernel code (in each subsystem, and later a user-level package of my kernel libraries to log individual regular applications to disk in the same way) to log what the host kernel does, step by step, to a dedicated debug-only ATA hard disk (the simplest option). With that I will be able to see how Linux uses paging, how it builds page tables, how it switches tasks...
So making my kernel capable of being included in another major OS to learn from it, and using my kernel to generate a book (to be extracted raw starting at LBA sector 0), is a huge improvement to the philosophy and to the code, current and new, to be improved and developed next.
If I can do this, then I will end up learning from any OS out there. My job will only be to add book-generating code across the code base and to reimplement things based on the algorithms learned from those fine OSes.
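As a rough, purely illustrative sketch of that idea (none of this is the project's actual code; book_event, book_write_raw and the example call site are all invented names), instrumentation of this shape could be dropped into a host kernel's code paths:

Code:
/* Purely hypothetical sketch of a "book-generating" hook: a host kernel's
 * code paths call book_event() to record what they are doing.  The raw
 * writer is stubbed out with stderr so the sketch runs in user space; a
 * real version would append to the debug-only ATA disk instead. */

#include <stdarg.h>
#include <stdio.h>

/* Stand-in for the embedded kernel's low-level writer. */
static void book_write_raw(const void *data, unsigned long size)
{
    fwrite(data, 1, size, stderr);
}

/* Format one event and hand it to the raw writer. */
static void book_event(const char *subsystem, const char *fmt, ...)
{
    char line[256];
    int n;
    va_list ap;

    n = snprintf(line, sizeof(line), "[%s] ", subsystem);
    va_start(ap, fmt);
    n += vsnprintf(line + n, sizeof(line) - (size_t)n, fmt, ap);
    va_end(ap);
    if (n > (int)sizeof(line))
        n = (int)sizeof(line);          /* clamp if the message was truncated */

    book_write_raw(line, (unsigned long)n);
}

/* Example (invented) call site: what a host kernel's task switch might log. */
static void example_task_switch(unsigned long next_cr3, int next_pid)
{
    book_event("sched", "switching to pid %d, CR3=%#lx\n", next_pid, next_cr3);
    /* ...the host kernel's real context-switch code would follow... */
}

int main(void)
{
    example_task_switch(0x1000, 42);
    return 0;
}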
Right now I will install VirtualBox, given that it's the emulator I know best for running things like Slackware 10/12, and it lets me use emulated standard ATA hard disks, so it will be ideal for me. I will install Slackware 10/12, or just a current version with the sources. It would probably be better to install a current Linux version so that I can then build Linux From Scratch. I would have a small partition for my base system (just to build Linux From Scratch), then a swap partition, then a big partition to compile and install Linux From Scratch.
I will then start looking at how to recompile the kernel first, and how to add as much code from my kernel as possible, new and old. It will improve my code by forcing it to become portable across operating systems despite being written in Assembler. I will probably need to write standard wrapper functions in C.
I will start by implementing only a Ring 0 function called WriteBookFromProgram(fileHandle, message, messageSize, messageType, ...), and will keep a system-wide offset for writing to a debug-only raw ATA hard disk (later a debug-only FAT32 partition on that debug-only ATA disk). I will use the Secondary Slave disk.
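A minimal sketch of what that function might look like, assuming a classic LBA28 PIO write to the secondary-slave drive at I/O base 0x170 and a single global LBA cursor; WriteBookFromProgram is the name from the plan above, while the helper names and register handling are only illustrative and skip locking, error handling and the trailing variadic parameters:

Code:
/* Hedged sketch: WriteBookFromProgram() as a Ring 0 logger that appends
 * messages to a raw, debug-only ATA disk (secondary slave, LBA28 PIO).
 * Illustrative only: no locking, no error checks, and it must never be
 * pointed at a disk that holds real data.  Compiles with GCC for x86. */

#include <stdint.h>
#include <string.h>

#define ATA_SEC_BASE   0x170                 /* secondary channel I/O base */
#define ATA_REG_DATA   (ATA_SEC_BASE + 0)
#define ATA_REG_COUNT  (ATA_SEC_BASE + 2)
#define ATA_REG_LBA0   (ATA_SEC_BASE + 3)
#define ATA_REG_LBA1   (ATA_SEC_BASE + 4)
#define ATA_REG_LBA2   (ATA_SEC_BASE + 5)
#define ATA_REG_DRIVE  (ATA_SEC_BASE + 6)
#define ATA_REG_CMD    (ATA_SEC_BASE + 7)    /* command on write, status on read */
#define ATA_CMD_WRITE  0x30                  /* WRITE SECTORS, LBA28 */
#define ATA_SR_BSY     0x80
#define ATA_SR_DRQ     0x08

static inline void outb(uint16_t p, uint8_t v)  { __asm__ volatile("outb %0, %1" :: "a"(v), "Nd"(p)); }
static inline void outw(uint16_t p, uint16_t v) { __asm__ volatile("outw %0, %1" :: "a"(v), "Nd"(p)); }
static inline uint8_t inb(uint16_t p) { uint8_t v; __asm__ volatile("inb %1, %0" : "=a"(v) : "Nd"(p)); return v; }

static uint32_t book_lba = 0;                /* system-wide "end of the book" cursor */

/* Write one 512-byte sector to the secondary-slave disk at the given LBA. */
static void ata_write_sector(uint32_t lba, const uint8_t *sector)
{
    while (inb(ATA_REG_CMD) & ATA_SR_BSY) ;              /* wait until not busy   */
    outb(ATA_REG_DRIVE, 0xF0 | ((lba >> 24) & 0x0F));    /* slave drive, LBA mode */
    outb(ATA_REG_COUNT, 1);
    outb(ATA_REG_LBA0,  lba        & 0xFF);
    outb(ATA_REG_LBA1, (lba >> 8)  & 0xFF);
    outb(ATA_REG_LBA2, (lba >> 16) & 0xFF);
    outb(ATA_REG_CMD,  ATA_CMD_WRITE);
    while (!(inb(ATA_REG_CMD) & ATA_SR_DRQ)) ;           /* wait until data is requested */
    for (int i = 0; i < 256; i++)                        /* 256 words = 512 bytes */
        outw(ATA_REG_DATA, ((const uint16_t *)sector)[i]);
}

/* Append one message to the "book"; fileHandle and messageType are accepted
 * but ignored here, and the trailing variadic arguments are omitted. */
void WriteBookFromProgram(int fileHandle, const char *message,
                          uint32_t messageSize, int messageType)
{
    uint8_t sector[512];
    (void)fileHandle; (void)messageType;

    while (messageSize) {
        uint32_t chunk = (messageSize > 512) ? 512 : messageSize;
        memset(sector, 0, sizeof(sector));
        memcpy(sector, message, chunk);
        ata_write_sector(book_lba++, sector);
        message     += chunk;
        messageSize -= chunk;
    }
}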
My kernel code will live outside the context of the main kernel, in the sense of having my kernel embedded in another one, but it will mostly just inspect what the host kernel does so that I can read the generated book (with generated source code, actual values, results from processes, ports, algorithms).
_________________________________________
The debug disk could be as small as 4, 8 or 20 GB... The generated book would be updated and fully rebuilt every time the system reboots, so in the future it will probably be better to use a FAT32 file system to save all versions (with screenshots, usable dumps generated as compilable source code, book text to be studied...) instead of just overwriting everything from LBA sector 0.
Having a debug-only disk is probably a good strategy for debugging even my own code. I know it's there, but I won't use it as an OS developer; I will only access it to read the generated book or books for the currently or most recently running OS.
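Purely as an illustration (every field here is invented, nothing comes from the project), a raw layout could start from a small header in LBA sector 0 of the debug disk carrying enough metadata to keep several book versions around until a FAT32 partition replaces it:

Code:
/* Hypothetical on-disk header for the "book" at LBA sector 0 of the debug
 * disk.  None of these fields come from the project; they only show how a
 * raw layout could carry enough metadata to keep several versions around
 * until a FAT32 partition replaces it. */

#include <stdint.h>

#define BOOK_MAGIC 0x4B4F4F42u              /* "BOOK", little-endian */

struct book_header {
    uint32_t magic;                          /* BOOK_MAGIC */
    uint32_t version;                        /* bumped on every reboot/rebuild */
    uint64_t boot_unix_time;                 /* when this book was (re)started */
    uint32_t first_data_lba;                 /* where the book text begins */
    uint32_t length_in_sectors;              /* how much of the disk it uses */
    uint8_t  reserved[512 - 24];             /* pad the header to one sector */
};

_Static_assert(sizeof(struct book_header) == 512,
               "the header must occupy exactly one sector");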
-------------------------------------------------------
So the point is that, as you can see, one can never stop improving the kernel, or anything in general. At least I know it will become extremely interesting after a few months of using the context of any OS as the base to run my own kernel (instead of starting from UEFI, for example), poking around everything and generating books to a raw disk, to be dumped later to regular files.
YouTube:
http://youtube.com/@AltComp126
My x86 emulator/kernel project and software tools/documentation:
http://master.dl.sourceforge.net/projec ... 7z?viasf=1
Re: I'm starting again.
~ wrote:...

I am the opposite. I don't care about the kernel, or the newer technical aspects of a complex platform.
In my OS, 40-45% of the code belongs to GUI management, and another 40-45% is actually the C compiler alone. The rest consists of the kernel and the server side of syscalls, hardware management, the PCB, and thread/process handling.
I think if someone is doing an OS and already has a working GUI, kernel, file system, memory management, etc., and suddenly has a month of spare time, they should write a good word processor or spreadsheet for it, and should not start to redesign the kernel.
I personally don't think it is a good idea to build the kernel alone, as an individual unit. I don't agree with today's popular OS-dev convention that separates the GUI, the base APIs, and the kernel, as it results in incohesive design and version-incompatible, slow, bloatware-ish results.
I think an operating system is more like a game console glued together with an office word-processor environment and a cohesive GUI. The kernel is insignificant, and should not even be a separate focal point of an operating system.
On extremely complex, imperialistic platforms like ARM, where you need tens of thousands of lines of code to even get a pixel written to the screen (or millions if you want to support multiple devices), of course you need an extremely complex and separate kernel... but only because you can't do it any other way, not because it's a better conception.
I look down on ARM and x86 (and the other somewhat popular platforms of today), and I will not write a complex kernel just because 200,000 people were unable to make a proper CPU in 30 years of work.
In summary, if the hardware guys do something seriously flawed, it's their job to fix it, not mine (as a programmer). If they refuse, a new platform will take over - just as x86 died and ARM took over its markets, ARM will just as unexpectedly and rapidly die, and a new platform will take over its position too. From this standpoint, however, it could be a nice idea to have a separable kernel, which you rewrite from platform to platform and drop in as a replacement, so the GUI code can stay pretty much unchanged; though you may still have an easier ride with ifdefs and a passively working function that polls the hardware on demand.
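As a small illustration of that last point (my sketch, not Geri's code; the platform macros, addresses and function names are invented), the portable GUI code could sit on top of a thin per-platform layer selected with ifdefs, with one polling function serviced on demand:

Code:
/* Illustrative only: a drop-in platform layer selected with ifdefs so the
 * GUI code above it never changes.  The platform macros, addresses and
 * pitch values are invented for the example. */

#include <stdint.h>

/* The interface the GUI code relies on, identical on every port. */
void     plat_put_pixel(int x, int y, uint32_t rgb);
uint32_t plat_poll_event(void);              /* returns 0 if nothing is pending */

#if defined(PLATFORM_X86_VESA)

static uint32_t *framebuffer = (uint32_t *)0xE0000000;   /* placeholder address */
static int       fb_pitch    = 1024;                     /* pixels per scanline  */

void plat_put_pixel(int x, int y, uint32_t rgb)
{
    framebuffer[y * fb_pitch + x] = rgb;
}

uint32_t plat_poll_event(void)
{
    /* poll the keyboard controller, timer, etc. on demand */
    return 0;
}

#elif defined(PLATFORM_SUBLEQ)

static uint32_t *framebuffer = (uint32_t *)0x00100000;   /* placeholder address */
static int       fb_pitch    = 1024;

void plat_put_pixel(int x, int y, uint32_t rgb)
{
    framebuffer[y * fb_pitch + x] = rgb;
}

uint32_t plat_poll_event(void)
{
    return 0;
}

#endif

/* The GUI code never sees the ifdefs; it just calls the two functions. */
void gui_draw_cursor(int x, int y)
{
    plat_put_pixel(x, y, 0x00FFFFFF);        /* white pixel at the cursor tip */
}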
Operating system for SUBLEQ cpu architecture:
http://users.atw.hu/gerigeri/DawnOS/download.html
Re: I'm starting again.
Hi,
Geri wrote:I think if someone is doing an OS and already has a working GUI, kernel, file system, memory management, etc., and suddenly has a month of spare time, they should write a good word processor or spreadsheet for it, and should not start to redesign the kernel.

That depends on how good the existing code is, and how advanced the OS developer has become.
For example, if someone has a GUI, kernel, file system, etc. that is extremely bad ("working", but slow, not supporting desired features, not having well-designed interfaces, etc.); should they build a whole pile of stuff (word processor, etc.) on top, even though they know everything they build on top will need to be thrown in the trash eventually (when they finally do improve the GUI, kernel, file system, etc., and all the interfaces have to be changed and everything that depended on the old pieces no longer works)?
For this case; in my opinion; if a person can learn useful things by building "stuff that will eventually be thrown in the trash" on top, then it's worthwhile to keep going and build things on top (because this provides experience that helps them improve the lower levels, etc); but if they're not going to learn much it's a complete waste of time and they should focus on making the lower pieces good (and only worry about building stuff on top when they're convinced that "stuff on top" will be built on a solid foundation). Unfortunately; things like loss aversion are perfectly natural, and sometimes people keep going long after it's beneficial.
For example; if I had kept going with the OS project I had in the late 1990s; by now I'd have support for a lot of devices and most of the important applications (web browser, word processor, etc); and it'd all be very slow, with no security, no fault tolerance, no support for multi-CPU, no support for 64-bit/long mode; and it'd all be worthless.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: I'm starting again.
Functions should always be written as self-contained, packed functions, and subsystems as reusable libraries.
The outward-facing system interfaces should be just wrappers, very thin and light.
Also, we can implement our own low-level framework, a "crazy library" where we add countless experimental things alongside stable legacy code for the best cases. In this way we will always be able to run that sort of miscellaneous test and keep upgrading our code.
That's why I'm trying to add my kernel to other kernels (Linux, ReactOS) and applications (VLC, GIMP, z26...). It will improve the quality of my code and APIs a little bit. At the very least it will be readily embeddable into any kernel, and into itself, with a consistent structure.
For APIs, we already have enough for the external face of the system: SysV, POSIX, WinAPI, HTML5/JavaScript, C/C++ libraries, different driver-subsystem implementations. We can even add them all to the same system, to the same kernel image, now that we know they are all used frequently enough to be ever-present in memory, whether modularly or monolithically. With that we can structure our internal code however we like, add more and more cases, and keep the old cases in place as we add totally new ones in parallel, so that new and old code can be used side by side with no problem.
This is how I've managed to advance. I do my best at any given moment and implement it. Later I figure out a better way to do things and add it as new functions, but I leave the old functions and structures intact, as different APIs. By the time I have to implement a final API, I just select the best internal functions; the API calls are nothing more than wrappers around my current best code out of all the functions my kernel library contains for a given task.
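As a sketch of that pattern (with invented names), the internal versions accumulate and the public API call is nothing but a thin wrapper that forwards to whichever one is currently best:

Code:
/* Illustrative sketch of the "thin wrapper" pattern: several internal
 * versions of the same task coexist, and the public API just forwards to
 * the one currently considered best.  All names are invented. */

#include <stddef.h>

/* First, simple implementation: byte-by-byte copy.  Kept around forever. */
static void *copy_memory_v1(void *dst, const void *src, size_t n)
{
    unsigned char *d = dst;
    const unsigned char *s = src;
    while (n--)
        *d++ = *s++;
    return dst;
}

/* Later, improved implementation: word-sized copy for the bulk (assumes
 * dst and src are word-aligned), falling back to v1 for the tail. */
static void *copy_memory_v2(void *dst, const void *src, size_t n)
{
    size_t words = n / sizeof(size_t);
    size_t *dw = dst;
    const size_t *sw = src;

    for (size_t i = 0; i < words; i++)
        dw[i] = sw[i];
    copy_memory_v1((char *)dst + words * sizeof(size_t),
                   (const char *)src + words * sizeof(size_t),
                   n - words * sizeof(size_t));
    return dst;
}

/* Public API: nothing but a thin wrapper around the current best version. */
void *api_copy_memory(void *dst, const void *src, size_t n)
{
    return copy_memory_v2(dst, src, n);
}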
YouTube:
http://youtube.com/@AltComp126
My x86 emulator/kernel project and software tools/documentation:
http://master.dl.sourceforge.net/projec ... 7z?viasf=1
Now actually starting again.
Due to some personal reasons, I had to delay starting again even further. Too busy.
I started about two or three days ago and, damn, I understand things so much quicker than I used to. No more copy-paste, which is a relatively huge change compared to before.
The name I came up with for it was "Quartz Operating System". Though ungeneric, no fucks given, by the way...