getting instructions on PCS
Today we use existing code on computers to write code... how was the software for the first computers put onto them when there was no existing code like assemblers or compilers?
Re:getting instructions on PCS
That was the same question we asked the lecturer who taught us compilers at uni.
This is what he told us
"The first computers were programmed using just binary codes available( 00 10 00). It was a real pain in the *ss with so much complexity involved. They were really human assemblers .
The programmers then started to write the first compiler ,in plain binary, which i think was the PASCAL one. I suspect that they developed an assembler in-between. It took them 18 year(yeah eighteen years) till the first compiler was successfully written. Now what we r doing is writing higher level complex tools using much simpler ones." ;D
Re:getting instructions on PCS
I guess this shows that things can come from nothing... eh?
Re:getting instructions on PCS
That's the way most advances like this come about. For example, to make high-precision machine tools, they first had to make crude ones that could be used to make more precise ones, which could then be used to make even more precise ones, etc.
Someone I know used to work on computers, programming them back when they were still the size of houses, and he said that in order to program them they literally had to flip switches that indicated 1 or 0 for binary code.
It's amazing how far computer technology has come these past decades.
- Pype.Clicker
Re:getting instructions on PCS
Note that if the machine's instruction set allows it, it can be fairly easy to build assembler programs. It's just a matter of translating text into bits.
for instance you could have things like
#define LOAD_OPCODE 0x10
#define A 0 // the accumulator
#define X 1 // the index register
#define Y 2 // the base register
#define S 3 // the stack pointer
#define LOAD(reg, addr) (LOAD_OPCODE|(reg)) ((addr)&0xff) ((addr)>>8) // opcode|register, low address byte, high address byte
so that LOAD(A,0x1234) can be assembled into "0x10 0x34 0x12" in a very simple way.
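In fact, the whole job of an early assembler is not much more than that macro applied over and over. Here is a minimal sketch in C of a "one-instruction assembler" for the same imaginary machine; the opcode value, register numbers and low-byte-first address encoding are the made-up ones from the example above, not any real instruction set.
/* One-instruction "assembler" for the imaginary machine above:
   emit (opcode|register), then the address low byte, then the high byte. */
#include <stdio.h>
#include <stdint.h>

#define LOAD_OPCODE 0x10
enum { A = 0, X = 1, Y = 2, S = 3 }; /* accumulator, index, base, stack pointer */

static int asm_load(uint8_t *buf, int reg, uint16_t addr)
{
    buf[0] = (uint8_t)(LOAD_OPCODE | reg); /* opcode byte */
    buf[1] = (uint8_t)(addr & 0xff);       /* low address byte */
    buf[2] = (uint8_t)(addr >> 8);         /* high address byte */
    return 3;                              /* bytes emitted */
}

int main(void)
{
    uint8_t code[3];
    int n = asm_load(code, A, 0x1234);
    for (int i = 0; i < n; i++)
        printf("0x%02X ", code[i]);        /* prints: 0x10 0x34 0x12 */
    printf("\n");
    return 0;
}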
Re:getting instructions on PCS
Weird that you should ask this question.
I'm treating my OS dev as a learning experience rather than something with any real objective in sight. I was curious about exactly the same thing, namely how it felt to start from nothing, so I've restricted myself to just using a hex editor (it's pointless translating opcodes from hex to binary, and the Intel manuals give them in hex, so I'm not dropping all the way down to binary).
I can confirm that it is perfectly doable (although I'm praying for the day I get far enough along to have made a working assembler), but it is a serious pain in the neck. Recalculating jump offsets by hand every time you change the code is not something I recommend to anyone but the terminally curious.
Unless you've got an interest in exploring things from scratch, I'd advise using the modern tools available to you.
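To give a feel for what "recalculating jump offsets by hand" means in practice, here's a tiny illustration using real x86 encodings (a short jump, EB, takes a signed 8-bit displacement counted from the byte after the jump instruction); the offsets and the NOP/HLT filler bytes are just for the example:
0000: EB 03      ; jmp short +3 -> next instruction is at 0002, so the target is 0005
0002: 90 90 90   ; three NOP bytes being skipped over
0005: F4         ; hlt, the jump target
Insert or delete a single byte between the jump and its target and that 03 has to be patched by hand, along with every other jump or call that crosses the edit.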
A Brief History of Booting, part 1
A bit of history, as I understand it (you will probably want to corroborate this elsewhere, as I may have much or all of these things wrong):
In the very earliest days of modern computing (1935-1948 or so), machines like ENIAC, the Harvard Mark IV and Colossus were 'programmed' by wiring up breadboards, plugging in instructions in a manner similar to the old-fashioned telephone switchboards. This was, to say the least, a tedious and difficult approach, and not a very general one. There were no in-core programs; what little memory the machines had was used solely for data.
Around 1946, with the spread of Turing's ideas about universal computing machines, the idea of storing program instructions was discovered by at least three groups independently of each other; for various hysterical reasons, the credit for this idea has generally gone to John von Neumann, who hadn't come up with it himself but had done considerable work in developing the concept both in theory and practice. It was soon proven that random access machines (the theoretical construct underlying this approach) were equivalent to the other major models of computation (universal Turing machines, recursive functions, cellular automata, etc.), and the 'von Neumann architecture' has been the basis of 99% of all computer architectures since 1948.
Even with this advance, coding programs was still a major problem; various means of entering binary data directly were used, as were various hardware bootstrap mechanisms, such as a system that at start-up would load a program off of a metal tape or a stack of punched cards (two of the earliest mass storage systems used) before the CPU itself began running. A variety of memory systems were in use before either ferromagnetic core or IC memories became the standard; some, such as drum or disk memories (used as main memory, not just mass storage), were persistent and didn't need to have the system executive reloaded at startup, while others, such as mercury delay lines or CRT phosphors (!), needed constant refresh in the same way that IC dynamic RAM does today (vacuum tube flip-flops, a common early way of building fast memory, didn't need refreshing, but did require power to hold their data). Since the startup sequence was long and costly, most machines were only shut down when absolutely necessary anyway. Later, when minicomputers started appearing, some of them (e.g., the PDP-8) had a set of toggle switches with which a program could be entered, usually just a loader to interface with a paper tape reader or something similar, kept as short as possible to limit the difficulty of entering it correctly. This was still common as late as the mid-1970s, when the first microprocessor-based computers (e.g., Altair, Sol) appeared, but these were soon supplanted by PROM-based loaders.
A Brief History of Booting, part 2
To write a new executive program, or most other programs for that matter, you would first have to write out the code by hand in octal or hex, depending on which your program loader used. In the case of the executive program, this was usually then entered on a special-purpose machine such as a card punch or tape writer, using a numerical keypad. User programs were often accompanied by a card with instructions (also in hex) for the card reader itself, which indicated how to load and run the program. As software-based execution monitors developed, these were replaced with instructions for the executive. These became important as subroutine libraries were developed to handle especially common tasks; the most important job the executive handled was 'compiling' (linking, in modern terminology) the libraries into a whole program.
It was also around this time that the first interpreters appeared, what today would be seen as a limited sort of virtual machine; these were used primarily to perform complex operations, such as floating-point math, which were particularly difficult to run on the hardware of the time, even as subroutines. These interpreters would accept numeric codes and operands as instructions, with the goal of simplifying the process of running these routines. The interpreter itself was, presumably, called as a library routine.
Even this early, it was obvious that hand-coding programs was a painful and treacherous activity, and with the advent of executive programs came the first symbolic assembly languages. At first, these were used mostly as a notation to write the code in, but by 1950 or so the first automatic 'program assemblers' were already appearing.
It was also common at this time to design programs in some abstract notation such as a flowchart; this would then be used as a guide for writing the program in hex or assembly. Some of these notations were algorithmic (the earliest known of the type, Zuse's Plankalkul, was developed in 1937, but never used with a working machine IIRC), and as soon as the idea of assemblers, linkers and floating-point interpreters appeared, it was natural to consider the possibility of automatically generating code from these notations directly. After a few false starts in the early 1950s (e.g., Flow-Matic), John Backus at IBM began a project intended to create a practical formula translation program (what was to become FORTRAN I). Since it was an experiment, and since the goal was a means of converting formulae into programs rather than a 'programming language' per se, the whole project was rather ad hoc; the control structures, in particular, were largely abstractions of those on the machine on which it was originally implemented. A major objective was efficiency, this having been among the most serious objections against earlier 'auto-coding' systems. The fast performance of the original FORTRAN compiler's generated code was an important factor in its success, and in the acceptance of high-level languages in general, but it also led to several compromises in the language which would haunt it for years.
As for machines today, the answer is simple: cross-compilers and cross-assemblers running on existing systems are used to write a basic loader program, which is then burned into a ROM. The ROM is mapped into part of the new machine's memory. Meanwhile, the initial programs - including basic tools like an editor, compiler, assembler, debugger, etc. - which it is to load are cross-developed and saved to disk(s) (or whatever) in a format usable by the ROM. This system should then be enough to use for further development on the machine itself.
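For a concrete (and heavily simplified) picture of that last step, here is a rough C sketch of the kind of loader such a ROM might contain. Everything hardware-specific is hypothetical: disk_read_sector() stands in for whatever the board's actual disk interface would be, and the load address and image size are invented for the example.
/* Hypothetical ROM loader: copy a fixed-size program image from the start
   of the disk into RAM, then jump to it. All names and numbers here are
   illustrative, not any real firmware interface. */
#include <stdint.h>

#define LOAD_ADDR     ((uint8_t *)0x10000) /* assumed RAM address for the image */
#define SECTOR_SIZE   512
#define IMAGE_SECTORS 64                   /* assume a fixed-size image */

/* Supplied by (imaginary) board-specific code: read one sector into buf,
   returning 0 on success. */
extern int disk_read_sector(uint32_t lba, uint8_t *buf);

void rom_loader(void)
{
    uint8_t *dst = LOAD_ADDR;

    /* Copy the program image, sector by sector, from disk into RAM. */
    for (uint32_t lba = 0; lba < IMAGE_SECTORS; lba++) {
        if (disk_read_sector(lba, dst) != 0)
            for (;;) ;                     /* no error handling: just hang */
        dst += SECTOR_SIZE;
    }

    /* Transfer control to the entry point of the loaded image. */
    void (*entry)(void) = (void (*)(void))LOAD_ADDR;
    entry();
}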
Re:getting instructions on PCS
it seems computers can evolve---
the matrix is coming..................................................