AmA wrote:The only tool I currently use is NASM. And soon I will try to code my own assembler integrated into my OS. This way I will not depend on any other dom0 OS, but only on my own software.
I have been working on my own toolchain, which for now is a simple compiler. Expect to work on it under a host OS like Linux or Windows, at least until it is stable enough that you no longer depend on another OS. Keep in mind, though, that going self-hosted will require you to handle filesystems, an editor, an actual assembly parser, a "screen driver", a keyboard driver, interrupt routines, and many other things. You can't make it all in a single pass if you are starting from scratch. There is also a point at which programming purely in assembly language stops being an advantage and becomes a time-consuming activity that you could, and should, automate to some degree if you want to become more productive and keep getting smarter.

It will take you several rewrites before you can migrate the toolchain to your own OS, at least if you want to be sure that your code is optimal and well tested, and that your codebase won't have to be rewritten several times over a span of inefficiently used years. It took me a year of "lazy" analysis, sporadic reading of a compiler book, and actually trying several approaches before I could come up with a basic but truly working compiler.

I wrote it in JavaScript because it is a very straightforward language, it sits at an excellent level for this task, and it offers a clean environment. Because it is interpreted, it is very easy and fast to modify (you don't need compilers, linkers or makefiles just for a simple conceptual test), and it can be debugged on any system that at least has Firefox. That is the advantage of a programming environment that is common and 100% standard across platforms; you will want one in some form for trying out concepts reliably. I won't implement the compiler formally in a lower-level language until my algorithms are optimal; doing otherwise would duplicate programming effort and waste time that is better spent checking the actual correctness of the raw algorithms and getting cleaner code for much less effort.

On top of that, I already use my compiler to translate sources into assembly, something I could readily do by hand, but now I have better things to do to advance my projects. When I find errors (which at this point is extremely frequent) I work to fix them. Once the code starts to look too bad, I will rewrite it in better and more effective ways, keeping all of the concepts and code sections that were proven to be the best ones.
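To give an idea of how small the core of such a prototype can be, here is a minimal sketch of a recursive-descent expression evaluator, the basic technique a simple compiler front end scales up from. I am showing it in C rather than JavaScript, and the grammar and names are only illustrative assumptions; this is not RealC's actual code.

Code:
/* Minimal sketch (not RealC): recursive-descent evaluation of
   integer expressions with + - * / and parentheses. */
#include <stdio.h>
#include <ctype.h>

static const char *p;                 /* cursor into the source text */
static long expr(void);               /* forward declaration */

static void skip(void) { while (isspace((unsigned char)*p)) p++; }

static long factor(void)              /* factor := number | '(' expr ')' */
{
    long v = 0;
    skip();
    if (*p == '(') { p++; v = expr(); skip(); if (*p == ')') p++; }
    else while (isdigit((unsigned char)*p)) v = v * 10 + (*p++ - '0');
    return v;
}

static long term(void)                /* term := factor (('*'|'/') factor)* */
{
    long v = factor();
    for (;;) {
        skip();
        if (*p == '*') { p++; v *= factor(); }
        else if (*p == '/') { p++; v /= factor(); }
        else return v;
    }
}

static long expr(void)                /* expr := term (('+'|'-') term)* */
{
    long v = term();
    for (;;) {
        skip();
        if (*p == '+') { p++; v += term(); }
        else if (*p == '-') { p++; v -= term(); }
        else return v;
    }
}

int main(void)
{
    p = "2 + 3 * (10 - 4)";
    printf("%ld\n", expr());          /* prints 20 */
    return 0;
}

A real front end replaces the direct evaluation with emitting assembly for each rule, but the structure of the routines stays the same.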
You can see most of the effort it took me, as well as the actual code and the latest version, here:
http://126.sytes.net/projects/realc/documentation/
You can see examples of the .CSM source files that my compiler understands here:
http://126.sytes.net/projects/x86/OS/LowEST/
AmA wrote:Yes, you are completely right about the code rewriting. But imagine you are reading a book (worst case: you paid for it rather than downloaded it) and in the middle of the book the author says: "But all this of course applies only to **** hardware". Then you learn that **** hardware is no longer produced or sold. Lol, 300 pages of wasted time... Some authors do this, believe me... Both things have to be avoided.
Like what books?
Also, you will usually want to buy books once you have reached a minimum level of expertise, once you have consumed all of the basic material you can get for free, and once you have actually tried to build a kernel. By then you will probably understand and be familiar with up to half of what common books have to tell you, and they will supply the other half of what you needed to know. With that basic background, and with the guidance you can find here and on any other developer site, what you fear cannot happen. In the x86 PC environment especially, anything you learn will at least give you historical background, which is a good thing and is fully functional and applicable to this day.
I haven't seen anything x86-related of a practical nature (EGA, VGA, Mode X, timers, the speaker, the floppy, games, graphics, compact assembly code and basic "optimizations") that has become completely useless, even material from the 8088 era. Keep in mind that the further back you go in PC history, the richer the books and tutorials seem to be in the basics, which you probably won't find in "modern" books with the same simplicity or in a reasonable amount of time. Even PC repair books of that era will give you a few code snippets that still work. All of those things remain fully integrated into even the latest x86 PC, so you lose nothing.
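As a concrete example of how long-lived those basics are: writing directly to the VGA text buffer at physical address 0xB8000, an 8088-era technique, still works on any legacy-BIOS x86 PC today. A minimal freestanding sketch, assuming your code already runs at kernel level with 80x25 text mode active and flat addressing (it won't run under a protected host OS, which blocks access to that region):

Code:
/* Sketch: print a string through the legacy VGA text buffer.
   Assumes 80x25 text mode and a flat view of memory at kernel level. */
#include <stdint.h>

static volatile uint16_t *const vga = (volatile uint16_t *)0xB8000;

void vga_print(const char *s, int row, uint8_t color)
{
    int col = 0;
    while (*s && col < 80)             /* one cell = character + attribute */
        vga[row * 80 + col++] = (uint16_t)(color << 8) | (uint8_t)*s++;
}

/* vga_print("Hello from 1981", 0, 0x0F);  -- white on black, top row */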
You can always stop reading a book if you find you don't need it at a given moment, but there is always something to learn from a well-written book.
AmA wrote:And finally, when one is writing software for Windows, he often uses third-party tools for optimizing, for debugging, etc. But I think that if one has decided to do an OS, it will be normal to try to do it all himself; if not, I recommend the LFS project.
But think about how what you create is, for practical purposes, always below and "simpler" than both the environment in which you created it and your own capabilities. Working in a sterile kernel environment presents that same barrier and is an unnecessary delay and level of effort. Rushing would be like trying to solve a very complex math formula while skipping key steps, or like trying to complete the programs you want for your OS with an undefined API or an unimplemented design for the intended tasks. Why would you need to rush if you develop your own way of achieving things in a reasonable time? Rushing betrays desperation at not advancing as fast as you wanted, and it will certainly get you very poor results; studying and experimenting with theory and practice in better detail is the way to go, and you will naturally become faster as you learn more things with time and a constant stream of clear information and action.
Almost anything can be tested in user space and then integrated into a kernel. The most natural thing is to program kernel-level things in a kernel environment as much as possible, and user-space things in a user-space environment; when you are just starting, the only user-space environment you have is a mature host OS like the ones everyone uses daily. Since pure DOS-like operating systems without any drivers loaded have no protection, you could ease your work by using something like FreeDOS or MS-DOS as a path to kernel-level programming and tests; DOS would then be no more than a temporary shell, used only as long as some tests need it and as long as you don't yet have a complete basic shell and environment of your own.
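For instance, a kernel memory allocator is really just code that manages a block of bytes, so you can write and debug it entirely under your host OS against a static array, then drop the same functions into your kernel to manage real memory. A minimal sketch (the names and the fixed arena size are my own illustrative assumptions):

Code:
/* Sketch: a trivial bump allocator, testable in user space.
   Under the host OS the arena is a static array and main() is the
   test harness; in a kernel the same kalloc() would manage a real
   region of memory. */
#include <stdio.h>
#include <stddef.h>

static unsigned char arena[4096];      /* stand-in for a kernel heap */
static size_t next_free = 0;

void *kalloc(size_t n)
{
    n = (n + 7) & ~(size_t)7;          /* keep 8-byte alignment */
    if (next_free + n > sizeof arena) return NULL;
    void *p = &arena[next_free];
    next_free += n;
    return p;
}

int main(void)                         /* host-OS test harness only */
{
    char *a = kalloc(100), *b = kalloc(200);
    printf("a=%p b=%p\n", (void *)a, (void *)b);
    return 0;
}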
It is very common to try to use the goal as the path to the goal itself, and too often that is a slow path that yields too few results considering all of the work you still have ahead. Look at how much work it takes just to boot the machine, load a "kernel", and enter Protected Mode or Long Mode; if you then display a colorful animated GIF, all you have done in practical terms is switch the way the CPU works, hand it your own program to execute, and make a big effort to decode a graphics file for a short animation that would have been far easier by other means. Even if you are very happy and proud, you have done something very basic, and many other things are still waiting to be done: things that are very difficult to create, program and make work, yet have an amazingly simple visible effect and are just as helpful for the users (and for yourself). So you have to be very effective at reducing the number of outright failed attempts, and at making sure that what you just achieved can be integrated with what you need to add next. It is better not to be so tied to existing technology that you cannot innovate, nor so far from it that you are incompatible with the current world. It is better to understand several concepts and the ways to implement them (e.g., multitasking and the different ways in which it can be implemented), and then, as you understand each one, program small related pieces that you can really test, until you have all the pieces that make up the whole of your intended algorithm.
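Taking the multitasking example: the context switch itself has to be tested at kernel level, but the scheduling policy is one of those small pieces you can prove correct in user space first. A sketch of testing just a round-robin picker (the task structure and names are illustrative assumptions):

Code:
/* Sketch: testing only the round-robin *policy* in user space.
   Real preemptive multitasking also needs a timer interrupt and a
   context switch; those are separate pieces to test on their own. */
#include <stdio.h>

#define NTASKS 3

static struct { int id; int ticks_left; } tasks[NTASKS] =
    { {0, 2}, {1, 3}, {2, 1} };

int pick_next(int current)             /* next runnable task, or -1 */
{
    for (int i = 1; i <= NTASKS; i++) {
        int t = (current + i) % NTASKS;
        if (tasks[t].ticks_left > 0) return t;
    }
    return -1;                         /* everything has finished */
}

int main(void)
{
    int cur = 0;
    while (cur != -1) {
        printf("running task %d\n", tasks[cur].id);
        tasks[cur].ticks_left--;
        cur = pick_next(cur);
    }
    return 0;
}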
Another thing is that you shouldn't depend too much on optimizing tools, but rather on improving your skills and algorithms. A bad algorithm will always malfunction, integrate poorly, or run too slowly, no matter how much optimization a tool applies to it. This way you will truly be in charge of your programs, and optimization will just be the automation of a process you understand and could do yourself, but that would be a waste of your time as a repetitive task. Using optimizing tools without knowing what they are doing is of little significance, and knowing what they do is something you won't manage until you have done the work manually, and extensively, for a good period of time. Actually implementing such a tool is just automating a task you have mastered inside out and are freeing yourself from repeating, because the tool can do it as well as you, with the bonus that you will have made it yourself.
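A concrete case of what no optimizing tool can fix for you: a compiler can shave cycles off a linear scan, but only you can replace it with a binary search once the data is sorted. A small sketch:

Code:
/* Sketch: an optimizer speeds up find_linear by constants only;
   replacing O(n) with the O(log n) find_binary is an algorithmic
   decision the programmer has to make. */
#include <stdio.h>

int find_linear(const int *a, int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key) return i;
    return -1;
}

int find_binary(const int *a, int n, int key)  /* a must be sorted */
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key) return mid;
        if (a[mid] < key) lo = mid + 1; else hi = mid - 1;
    }
    return -1;
}

int main(void)
{
    int a[] = { 1, 3, 4, 7, 9, 11, 15 };
    printf("%d %d\n", find_linear(a, 7, 9), find_binary(a, 7, 9));
    return 0;
}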
Read Michael Abrash's Graphics Programming Black Book. Take your time and, if possible, read it from start to end; the author's writing style and the structure of the book are most beneficial in that order. You will find many things that will improve whatever skills you have at a basic level. It doesn't only talk about graphics but about a very broad set of topics you need in order to be a better programmer. You most certainly won't implement everything discussed there, but grasping things from it always helps.
http://www.gamedev.net/reference/articl ... le1698.asp
Don't forget to have more than one machine handy, preferably several machines from different generations, as far as that is possible for you; even two identical machines are enough for a start. Otherwise, whenever you want or need to test real hardware, you will have a very hard time working solely with emulators or constantly rebooting your machine. And don't forget to get the best out of your current OS and program your algorithms there before thinking about rebooting your work machine, since that is the more natural, logical and rewarding path.