MSB-OS, DavidCooper and the other programmers

manhobby
Member
Posts: 111
Joined: Wed Mar 21, 2018 12:11 pm
Libera.chat IRC: esi

MSB-OS, DavidCooper and the other programmers

Post by manhobby »

DavidCooper claims on this page ( http://www.magicschoolbook.com/computin ... oject.html ) that...

"MSB-OS lets you program at the lowest possible level, but it also makes it really easy.... I reckon it's just as easy as using a [high level] programming language..."

…but which other programmers find it "easy" to write code that looks like the following?

Code:

51 192 191 0 128 11 0 176 0 185 0 1 0 0 170 170 64 226
Is DavidCooper the only programmer who finds it "easy" to write code like the code quoted above?

If not, which other programmers find it "easy" to write such code?
nullplan
Member
Posts: 1767
Joined: Wed Aug 30, 2017 8:24 am

Re: MSB-OS, DavidCooper and the other programmers

Post by nullplan »

I think he meant "easy compared to just using a hex editor". And in the end he achieves nothing an assembler wouldn't do as well. He says he wrote a sudoku solver that way, and mad respect to him for that, but I wrote a sudoku solver in C and thus it'll run on anything that has electrons flowing through it, not just PCs. It could run on a voting machine. :D
Carpe diem!
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: MSB-OS, DavidCooper and the other programmers

Post by DavidCooper »

manhobby wrote:…but which other programmers find it "easy" to write code that looks like the following?
It's only easy once you've learned what the numbers mean - anyone who hasn't put in the little effort needed to learn them will not have discovered that, so they can only make a judgement based on guesses rather than direct knowledge. I think most people would be able to program this way without any great difficulty, though it's clear that very few have any desire to do so.

When thinking about the value of my programming system for people other than myself, I've always seen it primarily as a useful way of letting beginners interact with the processor to get a feel for how programs work deep down (with as much of the abstraction removed as possible). I don't see this as a vital step in training them up as programmers, but I do see it as a good way of helping ordinary people to understand how these machines work, stripping the apparent magic out of the box to help them see the cause-and-effect mechanics in action. The gains are educational rather than it being intended as a practical programming system for the masses - I think everyone should be encouraged to understand the workings of the devices that will soon run every aspect of the world.

________________________________________________________________
nullplan wrote:I think he meant "easy compared to just using a hex editor".
I meant that it's as easy as using a high-level programming language.
And in the end he achieves nothing an assembler wouldn't do as well.
When I've written a new bit of code, I can put the cursor on the first byte of it, press "r" to run it straight away (either directly or through a monitor program) without any delays, sometimes even correcting mistakes in the code while it's running. If it's a new chunk of OS code, I don't need to reboot the OS to run it - it's all already in place and ready to run as soon as the last byte's been written. If the new code is part of an app, I can simply return control to the app and the new bit of code can again be run immediately.

When running new code through the monitor program, I can read and understand the machine code numbers that the CPU's about to crunch, so as soon as it starts to go wrong, I see the fault developing and can quickly work out how to correct it. I can escape from infinite loops by calling in the monitor program (which is tied to an interrupt routine that looks for a specific key combination) and I can then correct the fault, switch back out of monitor program mode and the code continues to run from there. Debugging is trivial, so I lose very little time hunting down faults. I can also change the alignment of code in a matter of seconds and test two or three versions a minute while hunting for the fastest alignment/arrangement of code (and real behaviour doesn't always match up to the rules in the optimisation manuals).

Bear in mind too that the interrupt routines in my OS were written and edited within my OS while the OS was running - this was a self-supporting development system from near the start, and I've always worked on real hardware. It's always been about maximising my control, but the biggest gain from this way of working is that there are no delays, so I can do a lot more testing and reworking of code in a given amount of time. I am no masochist - this is a much more comfortable and efficient way of working than using assembler, but apart from producing binary blobs, it is not suited to producing useful code for any operating system other than my own.

I should also make it clear that this programming system wasn't originally designed with ordinary human users in mind, and it was a surprise that it turned out to be as easy to use as it is. From the very start, the purpose of it was to build an intelligent system on top of it which would enable natural language programming, and from then on it would only be that intelligent system that would go on programming in machine code while I would switch over to using natural language instead. I didn't want the intelligent system to be slowed down by working through any intermediate programming language - it was obvious that it should convert directly from natural language to machine code, so in order to get a feel for how that was going to work, I decided I should program directly in machine code myself.

The system wasn't designed for anyone else, and nor was it designed for building anything else, but having discovered how easy and powerful direct machine code programming is, I wanted to open it up to let other people have a go at it too, although I never expected anyone to take it very far, and I would now strongly advise them not to do a lot with it. It should be regarded as a curiosity rather than a practical programming tool (even though it has proved to be a very successful tool for the one specific project which it was designed to build).
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
manhobby
Member
Posts: 111
Joined: Wed Mar 21, 2018 12:11 pm
Libera.chat IRC: esi

Re: MSB-OS, DavidCooper and the other programmers

Post by manhobby »

@DavidCooper, all OS Dev Forums members,

Please answer the following questions:

Are there other programmers who find it "easy" to write code that looks like the following?

Code:

51 192 191 0 128 11 0 176 0 185 0 1 0 0 170 170 64 226
If not, why not?

Is DavidCooper the only one who finds it "easy" to write code like the code quoted above?

If so, why?

If not, which other programmers find it "easy" to write such code?
StudlyCaps
Member
Posts: 232
Joined: Mon Jul 25, 2016 6:54 pm
Location: Adelaide, Australia

Re: MSB-OS, DavidCooper and the other programmers

Post by StudlyCaps »

@DavidCooper: Are you seriously saying that implementing a hash set or string tokenization code (as an example) is equally easy in a disassembler as in a high level language?

Even using C, which is a very thin abstraction, this simply isn't true. Calling a function, for example, requires knowing the calling convention, pushing to the stack, and setting up register states. This is more difficult than c = function(a,b). It isn't impossible, and it can even be a superior tool for some tasks, but insisting that high-level languages are only used out of laziness is a bizarre claim, and it flies in the face of common sense.
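
To make that concrete, here is roughly what c = function(a,b) costs when written by hand (a sketch only - 32-bit x86, the cdecl convention, NASM syntax and a Linux exit are all assumed here, not taken from this thread):

Code:

global _start
section .data
a:      dd 2
b:      dd 3
c:      dd 0
section .text
function:                   ; int function(int x, int y) { return x + y; }
        mov  eax, [esp+4]   ; first argument
        add  eax, [esp+8]   ; second argument
        ret                 ; under cdecl the caller cleans the stack
_start:
        push dword [b]      ; arguments pushed right to left
        push dword [a]
        call function       ; result comes back in EAX
        add  esp, 8         ; caller removes the two arguments
        mov  [c], eax       ; c = function(a, b)
        mov  ebx, 0         ; exit(0) via the Linux syscall interface
        mov  eax, 1
        int  0x80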

That said, I do think a live disassembler that you can jump into immediately is really cool, and actually pretty useful. I also think it's good as a tool to teach new programmers how system code works and how it comes together to form an operating system. I think a vital part of a CS education that gets skipped over too often these days is understanding how the machine actually interfaces with the system code, without abstraction. It seems to fall by the wayside in trying to teach "job skills" like 5 different types of milestone tracking charts.

@manhobby:
That it is easy is DavidCooper's opinion. I do not agree with him, and others on this forum have also disagreed with him. You need to read his opinion and others' opinions and decide for yourself what your opinion will be.
There will probably never be one opinion everyone agrees on though.
Do you understand?
manhobby
Member
Posts: 111
Joined: Wed Mar 21, 2018 12:11 pm
Libera.chat IRC: esi

Re: MSB-OS, DavidCooper and the other programmers

Post by manhobby »

StudlyCaps wrote:@DavidCooper: Are you seriously saying that implementing a hash set or string tokenization code (as an example) is equally easy in a disassembler as in a high level language?

Even using C, which is a very thin abstraction, this simply isn't true. Calling a function, for example, requires knowing the calling convention, pushing to the stack, and setting up register states. This is more difficult than c = function(a,b). It isn't impossible, and it can even be a superior tool for some tasks, but insisting that high-level languages are only used out of laziness is a bizarre claim, and it flies in the face of common sense.

That said, I do think a live disassembler that you can jump into immediately is really cool, and actually pretty useful. I also think it's good as a tool to teach new programmers how system code works and how it comes together to form an operating system. I think a vital part of a CS education that gets skipped over too often these days is understanding how the machine actually interfaces with the system code, without abstraction. It seems to fall by the wayside in trying to teach "job skills" like 5 different types of milestone tracking charts.

@manhobby:
That it is easy is DavidCooper's opinion. I do not agree with him, and others on this forum have also disagreed with him. You need to read his opinion and others' opinions and decide for yourself what your opinion will be.
There will probably never be one opinion everyone agrees on though.
Do you understand?
@StudlyCaps,

I understand.

Thanks for your response!
MichaelFarthing
Member
Posts: 167
Joined: Thu Mar 10, 2016 7:35 am
Location: Lancaster, England, Disunited Kingdom

Re: MSB-OS, DavidCooper and the other programmers

Post by MichaelFarthing »

manhobby wrote:@DavidCooper, all OS Dev Forums members,

Please answer the following questions:

Are there other programmers who find it "easy" to write code that looks like the following?

Code:

51 192 191 0 128 11 0 176 0 185 0 1 0 0 170 170 64 226
If not, why not?

Is DavidCooper the only one who finds it "easy" to write code like the code quoted above?
I started my operating system working entirely in machine code using a (hand-coded) 512 byte boot record. This had the ability to save and reload a larger chunk of memory, display a section in a hex editor and permit change of individual bytes (though navigation in the hex editor, other than 1 byte forward, had to be added). I worked entirely in hex rather than decimal but I can confirm that I quickly became so familiar with the hex opcodes that I could write most instructions as fluently as if I were using an assembler. I had the advantage that I had previously coded both an assembler of my own and a high level language. The area that was much more difficult was jump calculations and one of my first enhancements was to provide an automation for calculating these. I know David also made this a priority in his own system, though he adopted a very different technique for this. The other requirement that I found was the need to use many 0x90 bytes - no operation - to allow for code correction without having to recalculate lots of existing jumps.
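
To show why the jump calculations hurt (a sketch in NASM syntax, 16-bit as in a boot record; the encoded bytes in the comments hold only for this exact layout):

Code:

bits 16
start:
        jmp short done      ; EB 03: disp8 = done - (start + 2) = +3
        nop                 ; 90 \
        nop                 ; 90  } spare bytes - code can grow into these
        nop                 ; 90 /  without recalculating every jump over them
done:
        ret

Insert a single byte between the jump and its target and the EB 03 has to be repaired to EB 04 by hand - multiply that by every jump crossing the gap and the value of spare 0x90s and of automation is obvious.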

I did this as a challenge to myself to face a (fairly) similar environment to that of the first programmers and see if I could mimic their achievements - with the advantage, of course, that unlike them I knew what would be needed - a disassembler, assembler, text editor, debug tools and lots of tricks to get these to do more and more of the work. My predecessors, when they started, knew this not and had to learn it from experience. I have not continued coding directly in machine code because I now use the tools that I developed then and they do that work for me. I confess that, taking the series of bytes above and changing them in my head into hex, I am no longer able to recognise the whole sequence of instructions, though I could still manage quite a bit.

cmp al 192, mov di -32768, adc wrd ptr [bx+si] 176, ...

Doesn't make much coding sense :-)
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: MSB-OS, DavidCooper and the other programmers

Post by DavidCooper »

StudlyCaps wrote:@DavidCooper: Are you seriously saying that implementing a hash set or string tokenization code (as an example) is equally easy in a disassembler as in a high level language?
Clearly some things are slower. If I need to program a message to appear, I might write it as numbers, typically taking four key presses to input each letter, but I do this so rarely that speed isn't an issue, and anything that goes into a menu is now dealt with by a different method where I can simply edit the menu, typing the text straight into its class or object. I've never implemented a hash set, but if I need to, I'll write code to automate it. I now have code that lets me calculate floating point numbers and poke them into the code so that I don't have to spend ages working them out manually with a calculator. I have also automated parts of the generation of machine code numbers for some of the more complex instructions. Whenever something annoys me by being slow, I automate it, but I still write most of my machine code directly because it is no trouble to do so.
Calling a function, for example, requires knowing the calling convention, pushing to the stack, and setting up register states. This is more difficult than c = function(a,b).
You still have to know what a and b are and which order the function wants them in. What I do now though with complex programs is keep variables in places pointed at by BP so that I don't have to put them on the stack at all - I simply call a named function after pointing BP at the right variables so that the function can use that space instead of the stack. No frames, minimal copying, fully re-entrant, fast and simple. The deeper you go with such calls, the more values from BP are saved on the stack along with ret addresses, but hardly anything else ever needs to go on the stack - just the occasional value where there's no register free to hold it. I allocate very little space to stacks as a result.
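
Sketched out, the idea looks something like this (32-bit NASM syntax; the names, offsets and data here are illustrative only, not actual MSB-OS code):

Code:

; variables live in a block pointed at by EBP instead of on the stack
section .data
pair:   dd 7                ; [ebp]   first input
        dd 5                ; [ebp+4] second input
        dd 0                ; [ebp+8] result slot
section .text
sum_pair:                   ; callee works directly in the block EBP points at
        mov  eax, [ebp]
        add  eax, [ebp+4]
        mov  [ebp+8], eax
        ret                 ; only the return address touched the stack
caller:
        mov  ebp, pair      ; point EBP at this call's variables
        call sum_pair
        ret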
It isn't impossible, and it can even be a superior tool for some tasks, but insisting that high level languages are only used out of laziness is a bizarre claim, and it flies in the face of common sense.
I don't think I've ever called it lazy - I consider my way of working to be the lazy one because I find it more comfortable to work with. Abstractions make me feel ill if they work in arbitrary ways that don't conform to natural thinking, so I prefer to work with things directly where they make immediate sense. Where something genuinely needs automating to make it easier to understand, I'm all in favour of automating it, and indeed, my aim is to automate everything to the point where complete beginners who are good at thinking logically can write faultless complex programs just by holding a conversation with a machine, but importantly, that kind of abstraction is all natural abstraction fitting in with the way people already think rather than forcing them to think in unnatural ways - things like tangles of mathematical notation are not natural, and I dislike any kind of programming system that forces people to learn to produce that kind of mess (although I have nothing against it being used by those who prefer to work that way).

The main reason for me not wanting to automate things in conventional ways now though is that I want to automate them at a much higher level, and any work done to automate things in the meantime at an intermediate level will be rendered obsolete later, so it's only worth doing if it speeds up the production of the higher-level automation (that will replace it) more than it delays that production.
That said, I actually do think that as a live disassembler that you can jump into immediately is really cool, and actually pretty useful. I also think that as a tool to teach new programmers how system code works and how it comes together to form an operating system is also cool and good. I think a vital part of a CS education that gets skipped over too often these days is understanding how the machine actually interfaces with the system code, without abstraction. It seems to fall by the wayside in trying to teach "job skills" like 5 different types of milestone tracking charts.
Thanks for the positive response. Unfortunately, I still haven't got my OS into the right form for it to be used seriously in education - there's a lot of redesign needed so that everything's done properly. I broke all the rules from the start because I had no knowledge of how things are normally done and no access to any such knowledge either, so there's a danger that it could teach some very bad habits. I'll get it all fixed some day, but don't have time to work on that at the moment - I need to continue building better tools first so that I can tidy up the mess in a fraction of the time it would take if I tried to do it now.

_____________________________________________________________________

MichaelFarthing wrote:The area that was much more difficult was jump calculations and one of my first enhancements was to provide an automation for calculating these. I know David also made this a priority in his own system, though he adopted a very different technique for this. The other requirement that I found was the need to use many 0x90 bytes - no operation - to allow for code correction without having to recalculate lots of existing jumps.
I automated 32-bit jumps right from the start using the indexing system, but for a long time I had to count out all the short jump distances every time and mend them all every time I moved code around (to insert new bits into a routine or remove obsolete parts from it), but I later added an extra index to handle these short jumps too, though only storing information in it temporarily so as not to waste space on anything other than the positions of those jump instructions within 128 bytes of the location where I wanted to open up or close a gap. (Some instructions that aren't short jumps typically end up in the index too when doing this, but it's easy enough to identify them and remove them from it.)
(51 192 191 0 128 11 0 176 0 185 0 1 0 0 170 170 64 226 ...)

cmp al 192, mov di -32768, adc wrd ptr [bx+si] 176, ...

Doesn't make much coding sense :-)
I can't read code in hex, so I'd make similar mistakes attempting to make sense of a bit of code written that way. The code here starts with an XOR that sets EAX to 0. Then DI is loaded with the first byte of screen memory (the text screen used on boot). AL is already 0, so the instruction that sets it to 0 here is completely unnecessary. ECX is set to 256 for use as a count. AL is then posted to the screen twice (by the two 170s), then EAX is incremented (by the 64), and 226 is a loop instruction. The jump size that followed it must have been 251. That routine sends the values 0 to 255 to the screen to show how they appear as characters in text screen mode, and the colours change too as the same value is used for both the char and the colour byte.
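
Written out as a listing, that's (NASM syntax, with the decimal bytes from the quoted sequence alongside; the final 251 is the jump size inferred above):

Code:

        xor  eax, eax       ; 51 192          EAX = 0
        mov  edi, 0xB8000   ; 191 0 128 11 0  EDI -> start of text screen memory
        mov  al, 0          ; 176 0           redundant - AL is already 0
        mov  ecx, 256       ; 185 0 1 0 0     one pass per value, 0 to 255
next:   stosb               ; 170             write the character byte
        stosb               ; 170             write the colour byte
        inc  eax            ; 64              next value
        loop next           ; 226 251         251 = -5, back to the first stosb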
Last edited by DavidCooper on Tue Jan 01, 2019 11:04 am, edited 1 time in total.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
MichaelFarthing
Member
Member
Posts: 167
Joined: Thu Mar 10, 2016 7:35 am
Location: Lancaster, England, Disunited Kingdom

Re: MSB-OS, DavidCooper and the other programmers

Post by MichaelFarthing »

Umm yes. I managed 51 = 0x3c instead of 0x33. It then all goes pear-shaped - the change from odd to even changes the instruction address size and the number of bytes therefore required. I'm clearly out of practice - but sorry David - I don't regret it!
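
(For the curious: the odd/even point is the bottom bit of the opcode selecting byte or full-size operands - a sketch in NASM syntax, bits 32, with the bytes each line assembles to:)

Code:

bits 32
        xor eax, ebx        ; 33 C3 - odd opcode: full-size operands
        xor al, bl          ; 32 C3 - even opcode: byte operands
        cmp al, 192         ; 3C C0 - the 0x3C that 51 (really 0x33) became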
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: MSB-OS, DavidCooper and the other programmers

Post by DavidCooper »

MichaelFarthing wrote:I'm clearly out of practice - but sorry David - I don't regret it!
There's nothing for you to regret - I won't be working this way for much longer either, and I have no wish to try to repeat it all with ARM or any Itanium-style nightmare that might appear next. It's been fun, but it really is time to move on.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
StudlyCaps
Member
Posts: 232
Joined: Mon Jul 25, 2016 6:54 pm
Location: Adelaide, Australia

Re: MSB-OS, DavidCooper and the other programmers

Post by StudlyCaps »

DavidCooper wrote:... though it's clear that very few have any desire to do so.
I did make the assumption that this was implying a level of laziness, though maybe I misinterpreted. People have no desire to learn to code in assembly (or machine code, as the case may be) because programming at higher levels of abstraction is simply faster, more readable, and less error-prone. You need to parse and hold less information in your brain to accomplish the same thing.

Really I think it's about goals, and after reading your posts here I think I understand more what yours are. My goals are different though, so I think that using a more efficient tool, even if it isn't perfect, is the way to go for me. I respect what you're doing though - I guess I get a bit snippy when someone seems to be touting their esoteric coding methodology as the best way, one that others simply don't understand.
manhobby
Member
Posts: 111
Joined: Wed Mar 21, 2018 12:11 pm
Libera.chat IRC: esi

Re: MSB-OS, DavidCooper and the other programmers

Post by manhobby »

StudlyCaps wrote:
DavidCooper wrote:... though it's clear that very few have any desire to do so.
I did make the assumption that this was implying a level of laziness, though maybe I misinterpreted. People have no desire to learn to code in assembly (or machine code, as the case may be) because programming at higher levels of abstraction is simply faster, more readable, and less error-prone. You need to parse and hold less information in your brain to accomplish the same thing.

Really I think it's about goals, and after reading your posts here I think I understand more what yours are. My goals are different though, so I think that using a more efficient tool, even if it isn't perfect, is the way to go for me. I respect what you're doing though - I guess I get a bit snippy when someone seems to be touting their esoteric coding methodology as the best way, one that others simply don't understand.
@StudlyCaps,

You said that people have no desire to learn to code in assembly (or machine code, as the case may be) because programming at higher levels of abstraction is simply faster, more readable, and less error-prone, and that you need to parse and hold less information in your brain to accomplish the same thing.

According to Andrew Tanenbaum's book Structured Computer Organization, most people can remember that the abbreviations for add, subtract, multiply, and divide are ADD, SUB, MUL, and DIV, but few can remember the corresponding numerical values the machine uses:

"The reason that people use assembly language, as opposed to programming in machine language (in hexadecimal), is that it is much easier to program in assembly language. The use of symbolic names and symbolic addresses instead of binary or octal ones makes an enormous difference. Most people can remember that the abbreviations for add, subtract, multiply, and divide are ADD, SUB, MUL, and DIV, but few can remember the corresponding numerical values the machine uses.

The assembly language programmer need only remember the symbolic names because the assembler translates them to the machine instructions. The same remarks apply to addresses. The assembly language programmer can give symbolic names to memory locations and have the assembler worry about supplying the correct numerical values. The machine language programmer must always work with the numerical values of the addresses. As a consequence, no one programs in machine language today, although people did so decades ago, before assemblers had been invented".

Is it true that most people can remember that the abbreviations for add, subtract, multiply, and divide are ADD, SUB, MUL, and DIV, but few can remember the corresponding numerical values the machine uses?
StudlyCaps
Member
Posts: 232
Joined: Mon Jul 25, 2016 6:54 pm
Location: Adelaide, Australia

Re: MSB-OS, DavidCooper and the other programmers

Post by StudlyCaps »

manhobby wrote:Is it true that most people can remember that the abbreviations for add, subtract, multiply, and divide are ADD, SUB, MUL, and DIV, but few can remember the corresponding numerical values the machine uses?
As I said, different people will have different opinions. I will give my opinion though. Tanenbaum is correct that it is easier to remember abbreviated versions of normal words than numbers, because language is processed by a special part of the brain. Numbers must be remembered simply as disconnected pieces of data, whereas the abbreviations are linked to words with specific meanings to us. The human brain is better at remembering information which is linked to other information.

The abbreviations used in assembly language are correctly called "mnemonics". Mnemonics are patterns of words or letters which help people to remember something.
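
For example, the mapping a mnemonic hides is just a fixed table of numbers (a sketch; 32-bit x86, NASM syntax, with each instruction's bytes in hex and the decimal values a machine code programmer would have to memorise):

Code:

bits 32
        add eax, ebx        ; 01 D8  (decimal 1 216)
        sub eax, ebx        ; 29 D8  (decimal 41 216)
        ret                 ; C3     (decimal 195)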


In the past, people wrote computer programs by directly entering numbers into computers, which took a long time. To make the process more efficient, software developers would write the programs on paper using a pen and then a typewriter operator would type the program into the computer. This meant that the higher-paid developer could spend more time designing programs.

To make the developer's job easier, instead of writing numbers on the paper, they started writing abbreviated versions of words which represented each opcode; they called the abbreviations mnemonics. The developer would write the program on paper using these mnemonics and the typewriter operator would look up the corresponding numbers and type them into the computer. This was more efficient because the more expensive developer would not need to spend time looking up the correct numbers for the opcodes.

When computers became faster and storage cheaper, the job of the typist was replaced. Developers could sit at a terminal themselves and type text into computers and the text could be stored, copied and edited. It was a simple step then to write a program which would take the same mnemonics and automatically turn them into opcodes. It was more efficient on an old computer to have short commands, all the same length, all in capital letters, so mnemonics as we know them today became ingrained in computer culture.
manhobby
Member
Posts: 111
Joined: Wed Mar 21, 2018 12:11 pm
Libera.chat IRC: esi

Re: MSB-OS, DavidCooper and the other programmers

Post by manhobby »

StudlyCaps wrote:
manhobby wrote:Is it true that most people can remember that the abbreviations for add, subtract, multiply, and divide are ADD, SUB, MUL, and DIV, but few can remember the corresponding numerical values the machine uses?
As I said, different people will have different opinions. I will give my opinion though. Tanenbaum is correct that it is easier to remember abbreviated versions of normal words than numbers, because language is processed by a special part of the brain. Numbers must be remembered simply as disconnected pieces of data, whereas the abbreviations are linked to words with specific meanings to us. The human brain is better at remembering information which is linked to other information.

The abbreviations used in assembly language are correctly called "mnemonics". Mnemonics are patterns of words or letters which help people to remember something.


In the past, people wrote computer programs by directly entering numbers into computers, which took a long time. To make the process more efficient, software developers would write the programs on paper using a pen and then a typewriter operator would type the program into the computer. This meant that the higher-paid developer could spend more time designing programs.

To make the developer's job easier, instead of writing numbers on the paper, they started writing abbreviated versions of words which represented each opcode; they called the abbreviations mnemonics. The developer would write the program on paper using these mnemonics and the typewriter operator would look up the corresponding numbers and type them into the computer. This was more efficient because the more expensive developer would not need to spend time looking up the correct numbers for the opcodes.

When computers became faster and storage cheaper, the job of the typist was replaced. Developers could sit at a terminal themselves and type text into computers and the text could be stored, copied and edited. It was a simple step then to write a program which would take the same mnemonics and automatically turn them into opcodes. It was more efficient on an old computer to have short commands, all the same length, all in capital letters, so mnemonics as we know them today became ingrained in computer culture.
@StudlyCaps,

Thanks for your response!
eekee
Member
Posts: 872
Joined: Mon May 22, 2017 5:56 am
Location: Kerbin
Discord: eekee

Re: MSB-OS, DavidCooper and the other programmers

Post by eekee »

manhobby wrote:…but which other programmers find it "easy" to write code that looks like the following?

Code:

51 192 191 0 128 11 0 176 0 185 0 1 0 0 170 170 64 226
Hahahaha! Not the first time I've heard of (or actually heard) a programmer making a claim as crazy as this. The thing is, it's true in a way. Most programming tasks are easy if you know how, but it's learning how that's the hard part. Sometimes it's mind-wrenchingly hard!
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie