Solar wrote: Perhaps it's not so much a fear of "a disruptive technology" as rather the opinion that the whole subject is a bit delusional. Time will tell.
Mel didn't approve of compilers.
“If a program can't rewrite its own code”,
he asked, “what good is it?”
Mel had written,
in hexadecimal,
the most popular computer program the company owned.
Mel loved the RPC-4000
because he could optimize his code:
that is, locate instructions on the drum
so that just as one finished its job,
the next would be just arriving at the “read head”
and available for immediate execution.
There was a program to do that job,
an “optimizing assembler”,
but Mel refused to use it.
“You never know where it's going to put things”,
he explained, “so you'd have to use separate constants”.
It was a long time before I understood that remark.
Since Mel knew the numerical value
of every operation code,
and assigned his own drum addresses,
every instruction he wrote could also be considered
a numerical constant.
He could pick up an earlier “add” instruction, say,
and multiply by it,
if it had the right numeric value.
His code was not easy for someone else to modify.
reference:
http://www.catb.org/jargon/html/story-of-mel.html
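For readers unfamiliar with drum machines, the optimization the story describes, whether done by hand or by an "optimizing assembler", comes down to simple modular arithmetic. Here is a minimal sketch in Python; the drum size, the one-word fetch overhead, and the numbers are assumptions for illustration, not the RPC-4000's actual format or timing.
[code]
# Toy model of "optimum programming" on a drum machine: each instruction
# names the drum address of its successor, so the successor is placed just
# far enough around the drum that it rotates under the read head exactly
# when the current instruction finishes executing.

DRUM_SIZE = 4000  # assumed number of word slots in one drum revolution

def best_successor_address(current_address, execution_time_in_words):
    """Drum slot for the next instruction so it is read with no rotational
    delay: skip the slots that pass the head while the current instruction
    is being fetched (assumed 1 word-time) and executed."""
    return (current_address + 1 + execution_time_in_words) % DRUM_SIZE

# An instruction at slot 100 taking 6 word-times should chain to slot 107.
# A naive sequential layout (slot 101) would already have rotated past the
# read head, costing nearly a full revolution of waiting.
print(best_successor_address(100, 6))  # -> 107
[/code]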
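The "every instruction is also a numerical constant" trick can likewise be sketched with an invented encoding (a 4-bit opcode over a 12-bit address field here; Mel's machine differed): if an existing instruction happens to encode to a value the program needs, it can be referenced as data instead of storing a separate constant.
[code]
# Hypothetical encoding, not the RPC-4000's: opcode in the top 4 bits,
# operand address in the low 12 bits.
ADD_OPCODE = 0x2                              # invented opcode for "add"
add_instruction = (ADD_OPCODE << 12) | 0x1F4  # "add 500" -> 0x21F4 == 8692

# Elsewhere the program needs the constant 8692 as a multiplier; instead of
# storing it separately, it multiplies by the instruction word itself.
value = 3
result = value * add_instruction
print(hex(add_instruction), result)           # 0x21f4 26076
[/code]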
Richard Hamming, The Art of Doing Science and Engineering, p. 25:
In the beginning we programmed in absolute binary... Finally, a Symbolic Assembly Program was devised -- after more years than you are apt to believe during which most programmers continued their heroic absolute binary programming. At the time [the assembler] first appeared I would guess about 1% of the older programmers were interested in it -- using [assembly] was "sissy stuff", and a real programmer would not stoop to wasting machine capacity to do the assembly.
Yes! Programmers wanted no part of it, though when pressed they had to admit their old methods used more machine time in locating and fixing up errors than the [assembler] ever used. One of the main complaints was when using a symbolic system you do not know where anything was in storage -- though in the early days we supplied a mapping of symbolic to actual storage, and believe it or not they later lovingly pored over such sheets rather than realize they did not need to know that information if they stuck to operating within the system -- no! When correcting errors they preferred to do it in absolute binary.
FORTRAN was proposed by Backus and friends, and again was opposed by almost all programmers. First, it was said it could not be done. Second, if it could be done, it would be too wasteful of machine time and capacity. Third, even if it did work, no respectable programmer would use it -- it was only for sissies!
John von Neumann, when he first heard about FORTRAN in 1954, was unimpressed and asked "why would you want more than machine language?" One of von Neumann's students at Princeton recalled that graduate students were being used to hand assemble programs into binary for their early machine. This student took time out to build an assembler, but when von Neumann found out about it he was very angry, saying that it was a waste of a valuable scientific computing instrument to use it to do clerical work.
reference:
http://worrydream.com/dbx/
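To make "a mapping of symbolic to actual storage" concrete, here is a rough two-pass assembler sketch in Python. The mnemonics, word format, and label syntax are invented for illustration and do not correspond to any historical machine.
[code]
# Pass 1 assigns each line a storage address and records label addresses;
# pass 2 replaces mnemonics and symbolic operands with numbers. The symbol
# table it returns is the kind of listing Hamming says programmers pored over.

OPCODES = {"LDA": 0x1, "ADD": 0x2, "STA": 0x3, "HLT": 0x0}

def assemble(lines, origin=0):
    symbols, stripped = {}, []
    for offset, line in enumerate(lines):          # pass 1
        label, _, rest = line.partition(":")
        if rest:                                    # "LOOP: ADD X" style
            symbols[label.strip()] = origin + offset
            line = rest
        stripped.append(line.strip())
    words = []
    for offset, line in enumerate(stripped):        # pass 2
        parts = line.split()
        opcode = OPCODES[parts[0]]
        operand = symbols.get(parts[1], 0) if len(parts) > 1 else 0
        words.append((origin + offset, (opcode << 12) | operand))
    return words, symbols

program = ["START: LDA X", "ADD X", "STA X", "HLT", "X: HLT"]
binary, symbol_map = assemble(program, origin=100)
print(symbol_map)                                   # {'START': 100, 'X': 104}
print([(addr, hex(word)) for addr, word in binary])
[/code]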
Other quotes come to mind:
"640K ought to be enough for anybody."
"There is no reason for any individual to have a computer in his home."
History has shown that they were wrong.
Influential figures in Unix history (Dennis Ritchie, Brian Kernighan, and Ken Thompson, among others) have emphasized that writing the system in the high-level compiled language C, and the portability that came with it, has been one of Unix's greatest strengths.
“C has become successful to an extent far surpassing any early expectations. What qualities contributed to its widespread use?
“Doubtless the success of Unix itself was the most important factor; it made the language available to hundreds of thousands of people. Conversely, of course, Unix’s use of C and its consequent portability to a wide variety of machines was important in the system’s success. But the language’s invasion of other environments suggests more fundamental merits.
“Despite some aspects mysterious to the beginner and occasionally even to the adept, C remains a simple and small language, translatable with simple and small compilers. Its types and operations are well-grounded in those provided by real machines, and for people used to how computers work, learning the idioms for generating time- and space-efficient programs is not difficult. At the same time the language is sufficiently abstracted from machine details that program portability can be achieved.
“Equally important, C and its central library support always remained in touch with a real environment. It was not designed in isolation to prove a point, or to serve as an example, but as a tool to write programs that did useful things; it was always meant to interact with a larger operating system, and was regarded as a tool to build larger tools. A parsimonious, pragmatic approach influenced the things that went into C: it covers the essential needs of many programmers, but does not try to supply too much.”
reference:
https://www.bell-labs.com/usr/dmr/www/chist.pdf
The point is, time moves on.
Will NLP (natural language programming) and AGI (artificial general intelligence) replace most developers?
Time will tell.
David Cooper,
What is your opinion of the message I quoted above?