DavidCooper wrote:
If I write a paragraph of instructions and then say underneath, "Do that ten times," (or say "Do this ten times," above the paragraph) the compiler will set up a count, adjust it after each loop, and stop looping when the count runs out. That is easy to understand, and just as efficient. Now, why would anyone think it's a mistake to do that? The same applies to a host of other things that could be done well in natural language, and once there's enough intelligence in the machine to cope with complexity and hide that complexity from the user, it will be much clearer and more compact than normal program source.
Permit me to point out the anaphor in that: 'this' (or 'that') is implied, but never stated, to refer to the paragraph before or after it - and when it points forward, it is technically a 'cataphor'. Anaphors, and indeed any implicit reference not fixed in the syntax, require the ability to interpret context - something that is really, really hard to do.
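To make the problem concrete, here is a minimal Common Lisp sketch (all names hypothetical, a toy rather than anything real) of the guess a translator is forced into: without genuine context, about the best it can do is bind the anaphor to the nearest preceding block, which silently goes wrong whenever the writer actually meant the block that follows.
[code]
;;; Hypothetical sketch: instruction blocks arrive as thunks, and
;;; "do that ten times" has to be bound to one of them.  The only cheap
;;; rule is "nearest preceding block" - precisely the guess that fails
;;; when the writer meant "this" (the block that FOLLOWS).

(defun resolve-referent (history &key cataphoric upcoming)
  "Guess the block an anaphor refers to.  HISTORY lists the blocks seen
so far, most recent first; UPCOMING is the block that follows, if any."
  (if cataphoric
      upcoming          ; "Do this ten times:" <block>
      (first history))) ; <block> "Do that ten times."

(defun do-that-n-times (n history)
  "Expand \"do that N times\" with the naive nearest-antecedent rule."
  (let ((target (resolve-referent history)))
    (loop repeat n do (funcall target))))

;; (do-that-n-times 10 (list (lambda () (print "the paragraph above"))))
;; Nothing here interprets context; whether the author meant the last
;; sentence, the last paragraph, or the block about to come is a guess.
[/code]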
Note that it is sort of easy to fake it, as is done by Eliza-class chatbots such as Siri, Alexa and Cortana - they keep a record of the recent statements, and in the case of those three can also pass the conversation to a huge battery of remote servers with vast databases, allowing them to infer the most likely contextual values from a series of weighted comparisons.
All they are doing is converting the speech to their best-guess textual form, then calling out to the server farms, which filter through a huge number of possible meanings to find a match that gets returned to the local system. It is a highly compute-intensive and I/O-intensive process, not so much from the comparisons themselves as from the volume of data being sifted through. On their own, they are just chatbots with speech synthesis, and while Eliza famously fooled a lot of people, no actual intelligence is involved - it says more about human perception of intelligence than it does about how to create an artificial one. Most of them don't even have a neural network running on the local unit, and don't really need one on the 'cloud servers' except for the comparison-weighting steps.
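For a sense of how shallow the trick is, here is a toy Eliza-style responder in Common Lisp (the keyword table, weights and replies are invented for illustration): it keeps a transcript of recent statements and ranks canned replies by a crude weighted keyword match - the same kind of sifting the server farms do, just at a vastly smaller scale.
[code]
;;; Toy Eliza-style responder: remember recent statements, tokenize the
;;; input, and pick the canned reply with the best weighted keyword score.
;;; The pattern table below is made up purely for illustration.

(defparameter *recent-statements* '()
  "Rolling record of what the user has said, newest first.")

(defparameter *patterns*
  '((("mother" "father" "family") 5 "Tell me more about your family.")
    (("computer" "machine")       3 "Do machines worry you?")
    (("always" "never")           2 "Can you think of a specific example?")
    (("yes" "no")                 1 "You seem quite certain."))
  "Entries of the form (keywords weight canned-reply).")

(defun tokenize (string)
  "Split STRING into lowercased words on spaces."
  (loop for start = 0 then (1+ end)
        for end = (position #\Space string :start start)
        collect (string-downcase (subseq string start end))
        while end))

(defun score (words entry)
  "Weight of ENTRY times the number of its keywords present in WORDS."
  (* (second entry)
     (count-if (lambda (word) (member word (first entry) :test #'string=))
               words)))

(defun respond (input)
  "Record INPUT and return the highest-scoring canned reply."
  (push input *recent-statements*)
  (let ((words (tokenize input)))
    (third (first (sort (copy-list *patterns*) #'>
                        :key (lambda (entry) (score words entry)))))))
[/code]
No understanding enters into it; the 'context' is just whatever happens to be sitting in the transcript and the weights.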
Contrary to the hype, most of 'deep learning' is just throwing massively distributed algorithms like MapReduce at techniques dating back to the 1980s and earlier - hell, perceptrons, which were the basis of all later neural network work (though the original model proved faulty, not even managing to be Turing-equivalent), were developed in 1957, at a time when high-level languages were an experimental concept, much if not most programming was done in hand-assembled machine code, the switch from vacuum tubes to transistors was just barely reaching the production stage, and large-scale core memory was still being tested at MIT (on the TX-0, which was built for that purpose and only got used as a general-purpose system after it was officially retired).
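For reference, the whole of that 1957-vintage idea fits in a few lines. Here is a minimal Common Lisp sketch of the classic perceptron and its error-correction update, trained on AND since a single-layer perceptron famously cannot learn XOR:
[code]
;;; The perceptron: a weighted sum, a hard threshold, and the update
;;; w <- w + lr * (target - output) * x.  AND is linearly separable, so
;;; this converges; XOR is not, which is the limitation the original
;;; single-layer model could never get past.

(defun perceptron-output (weights bias inputs)
  (if (plusp (+ bias (reduce #'+ (mapcar #'* weights inputs)))) 1 0))

(defun train-perceptron (samples &key (lr 0.1) (epochs 25))
  "SAMPLES is a list of (inputs target) pairs.  Returns weights and bias."
  (let ((weights (make-list (length (first (first samples)))
                            :initial-element 0.0))
        (bias 0.0))
    (dotimes (epoch epochs)
      (dolist (sample samples)
        (destructuring-bind (inputs target) sample
          (let ((err (- target (perceptron-output weights bias inputs))))
            (setf weights (mapcar (lambda (w x) (+ w (* lr err x)))
                                  weights inputs)
                  bias    (+ bias (* lr err)))))))
    (values weights bias)))

;; (train-perceptron '(((0 0) 0) ((0 1) 0) ((1 0) 0) ((1 1) 1)))
;; => weights and bias implementing logical AND
[/code]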
I won't say that an AI based on a linear-bounded automaton, or a collection of LBAs in parallel, is impossible, but we certainly aren't close to it now. Thing is, the human brain isn't an LBA, and in fact our brains mostly work because so much is 'hardwired' - we don't compute visual stimuli, we get them essentially 'pre-processed' by the visual cortex before they reach the frontal lobe (and they hit the amygdala first, for threat analysis, which often responds long before the 'conscious mind' has received the signal). We have a lot less awareness of, and agency over, our own actions than we think - most of what we take to be our motives are interpretations the cognitive parts of the brain construct after the fact. A large part of 'human intelligence' is basically internal self-misdirection - smoke and mirrors, with the magician and the audience being one and the same - which makes sense given that the reason it came to be wasn't thinking, but survival.
Which means that an Artificial Intelligence almost certainly won't resemble a natural one of the sort we are familiar with, unless it is created as an explicit simulation of one - which would basically mean throwing a lot of hardware-based neural networks at the problem (networks which can be implemented in ways other than as an LBA, often far more efficiently), rather than solving it.
This also relates to why training is so key to programming skill, and why even an AI-backed natural-language processor would have trouble with Plain English programming - humans are really bad at planning processes out explicitly and in detail. It isn't how our own brains work at the hardware level. Computers make bad humans, but humans also make bad computers - the way human brains work (or, more often than not, only appear to work) at a neurological level just isn't suited to it. It takes a lot of training and practice to get good at it, and in case you haven't noticed, most people who do it for any length of time go a bit crazy.
And getting back to anaphors: these work for people because our brains handle them 'in wetware', not by analyzing them in a series of steps (even in parallel). We do it well because it is something we are structured to do without conscious awareness. Computers just don't work the way we do, and while it is possible (in principle, anyway) to make something that simulates our brains, doing so is massively compute-intensive, far less efficient, and far more effort than just doing the job in a way the computer can handle more readily.
(This is well-trod ground. The subject of anaphoric macros - a very, very limited application of anaphora in programming - is something Lispers have been studying since the 1960s, and while they can be quite powerful, they are also fraught with pitfalls. Paul Graham wrote about them extensively in On Lisp, as did Doug Hoyte in Let Over Lambda, and while both really, really wanted to see them get wider use, they also admitted that they were often more trouble than they were worth. And this isn't even anaphora in general - this is for something that has been explicitly set up for anaphoric use ahead of time, and in some ways only imitates natural-language anaphora as a coding shortcut. I intend to use them extensively in Thelema, but I am also sort of special-casing them to make them more accessible.
Real context-sensitive anaphora? We don't have any way to deal with those, for reasons that are as true now as they were when Chomsky came up with the theory of Language Hierarchies.)
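For the curious, the canonical example of the limited, set-up-ahead-of-time kind is Graham's aif from On Lisp, which deliberately captures the symbol 'it' so the branches can refer back to the test's value - that intentional variable capture being exactly the pitfall both authors warn about. (The usage sketch below uses placeholder names.)
[code]
;;; AIF, as given in On Lisp: bind the result of TEST to the symbol IT
;;; so the branches can refer to it anaphorically.

(defmacro aif (test then &optional else)
  `(let ((it ,test))
     (if it ,then ,else)))

;; Reads almost like the English "if there is an entry, use it":
;; (aif (gethash key table)
;;      (process it)
;;      (warn "no entry for ~a" key))
[/code]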