embryo wrote: But you are still imprisoned by hardware designers. Processor instructions in binary form also lock you within an arbitrary notation, expressed in ones and zeros, and every new processor introduces a new notation.
That's true, but I find it easier to work with the machine instructions directly rather than having to handle another layer of complexity on top. I suffer from a mental block when it comes to working at higher levels, and it only lifts at the opposite extreme of natural language.
The only problem here is that we need to identify the required set completely - that's the biggest problem. And one more point: each algorithm can be simple, but the entire system can be very complex.
But most of it is just about taking instructions in from the outside and implementing them, and it may even be that all of it is about that. The rest of the problem of creating intelligence may be no different, because when we think independently and come up with new ideas, we are primarily (or perhaps exclusively) running programs that have been trained into us from the outside. Children are trained both deliberately and by accident in how to solve problems, and they collect a set of such skills which they can then employ to innovate in any area, though some obviously learn to do it better than others (perhaps because they are better able to learn - that seems to be the main thing that affects human intelligence, because there's nothing specific that less intelligent people aren't capable of doing if they work at it for long enough).

The first step, though, is not to worry about the innovation side but simply to focus on taking natural language instructions and finding ways to run them as program code, because it is that process that these universal rules are there for - they are not used for general thinking. Once you have programmed the machine to handle all input of that kind, you can turn your attention to telling it how to go about thinking for itself, training it to look for problems and to try to work out how they might be solved. Most people never innovate, after all, but merely become good (or bad) at running more mundane programs to do exactly the same things as everyone else.
DavidCooper wrote: The key to it all though is how to unpick the natural language instructions and keep breaking them down until every component part of an instruction can be handled by one of the universal algorithms which is already in place in the machine and ready to deal with it. As soon as all natural language instructions can be handled in this way, it will be game over for all other programming methods.
Yes, but it means we would have to wait until real AI is implemented. Until that point we would have no tools to simplify our work.
What I'm suggesting is that it is by doing this work that AI gets implemented, so there is no point in sitting back and waiting for it to be done before starting on it. By writing an interpreter to run natural language programs, I think it will take us straight to AGI. It will lead to a way of programming where you create programs by holding a conversation with the machine, while the machine guides the process by asking how each part of the program is to be broken down into smaller steps. Then when it runs, it can run slowly, allowing the human to comment on how it's going and to modify the program along the way. This may be close to what people are already doing in the highest-level programming languages, of course, but the speed of writing programs is still held back enormously simply because everything has to be done through a programming language rather than normal human language.
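To make the shape of that conversation concrete, here is a minimal sketch in Python of the loop being described: the machine keeps asking how to break an instruction down until every part matches one of its built-in routines, then runs the result one step at a time so the human can watch and object. Every name in it (PRIMITIVES, break_down, run_slowly) is invented purely for illustration - it isn't any existing tool, just the control flow.

Code:
def prim_add(env, a, b, out):
    """Built-in 'universal algorithm': add two named values."""
    env[out] = env[a] + env[b]

def prim_show(env, name):
    """Built-in 'universal algorithm': display a named value."""
    print(name, "=", env[name])

PRIMITIVES = {"add": prim_add, "show": prim_show}

def is_primitive(step):
    return step.split()[0] in PRIMITIVES

def break_down(instruction):
    """Keep asking the human to split an instruction until every part is primitive."""
    if is_primitive(instruction):
        return [instruction]
    print(f'How do I do "{instruction}"? (smaller steps, separated by ";")')
    parts = [p.strip() for p in input("> ").split(";") if p.strip()]
    steps = []
    for part in parts:
        steps.extend(break_down(part))   # recurse until only primitives remain
    return steps

def run_slowly(steps, env):
    """Execute one step at a time so the human can watch and object."""
    for step in steps:
        print("about to:", step)
        if input("Enter to continue, anything else to pause > "):
            print("paused so the program can be revised")
            return
        name, *args = step.split()
        PRIMITIVES[name](env, *args)

if __name__ == "__main__":
    env = {"x": 2, "y": 3}
    program = break_down("work out the total of x and y and display it")
    run_slowly(program, env)

If you answer the first question with something like "add x y total; show total", both parts match primitives and the program runs step by step; anything that doesn't match gets broken down further in the same way.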
At least, that is your suggestion. But maybe you have already managed to go without traditional tools and have implemented your own set of tools which works exactly like the traditional ones. But is it efficient to re-implement everything?
I've built tools that allow me to work the way I want to work, but other people who are happy with the tools they use are really at no disadvantage. I thought from the start that AGI would write machine code directly and that I should do the same, but the real program is written in your head at the highest level no matter how you write your program code, and it's going to be exactly the same for AGI. AGI will convert from that directly into machine code without fiddling around at any levels in between, but there's absolutely nothing special about the machine code level other than that processors understand it (though with x86 it all gets changed again before the processor runs it anyway).

Ideally programming would be done in an artificial language with as much ambiguity removed from it as possible, but that would be at the same level as natural languages, since it could be spoken as a fully practical means of ordinary communication between humans. Even so, natural languages won't be much slower, as we know well enough what the ambiguities are and when they occur in our speech, and we generally clarify what we say as we go along to remove them whenever it's necessary to spell out which meaning is intended. Within our heads, though, we think in something like a language, but one which lacks ambiguities. The machine will need to do the same, translating input from the outside into an unambiguous internal form and asking for clarification about the intended meaning wherever the human doesn't provide it automatically.

As soon as it can run the internalised, ambiguity-free version of the natural language program with the set of universal problem-solving routines, the task of understanding what the machine is being asked to do has been completed. It may not work the way the human intended, of course, but the human can then watch the program run and object to what it's doing, and the machine will present to the human the part of the program where things probably went wrong, with the rules the program is applying spelt out right there in natural language form, making it easy for the human to understand what is causing the problem. Even the comments will be treated as part of the source code, and the machine will object if they don't match up with the natural language instructions.
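As a rough illustration of that disambiguation step, here is a sketch built on an invented mini-lexicon: each word maps to one or more unambiguous internal senses, and whenever more than one complete reading survives, the machine asks which meaning was intended rather than guessing. The lexicon and sense names are made up purely for the example and aren't a claim about how the real internal language would look.

Code:
from itertools import product

# Invented mini-lexicon: surface word -> possible unambiguous internal senses.
LEXICON = {
    "sort":  ["ORDER_ITEMS", "CLASSIFY_ITEMS"],   # ambiguous: order vs categorise
    "the":   ["DEF"],
    "files": ["DISK_FILES", "PAPER_FILES"],       # ambiguous: which kind of file?
    "by":    ["USING_KEY"],
    "date":  ["FIELD_DATE"],
}

def readings(sentence):
    """All combinations of senses for the words in the sentence."""
    senses = [LEXICON[w] for w in sentence.lower().split()]
    return list(product(*senses))

def internalise(sentence):
    """Translate input into a single, ambiguity-free internal form,
    asking the human whenever more than one reading survives."""
    candidates = readings(sentence)
    if len(candidates) == 1:
        return candidates[0]
    print(f'"{sentence}" has {len(candidates)} possible meanings:')
    for i, c in enumerate(candidates):
        print(f"  {i}: {' '.join(c)}")
    choice = int(input("Which one did you mean? "))
    return candidates[choice]

print(internalise("sort the files by date"))

A real system would rule most readings out from context before asking, but the principle is the same: nothing reaches the internal form until only one meaning is left.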
The speed gains in being able to program in natural language are obvious, but the biggest gains will come when you can say things like "do it the same way as you do when doing that" and the machine can work out what it is that it does in one program that you want it to do in the new one. It may then need to ask you to spell out exactly what that is and how it relates to the data being manipulated in the new program, but it will let you describe the program in the most economical way possible, with anything the machine manages to work out directly saving you the trouble of spelling things out in greater detail.

The intelligence required by the machine to make sense of such instructions may be possible to program into it simply by telling it how to go about interpreting things: when someone tells you to do something the same way as you do with something else, you look for similarities in the tasks and try to work out which things are being equated. To work out which things are being equated, you can look at previous experience and see which things have most often been equated in the past. That requires an experience database to be kept, but it would be created and maintained by running another learned program. The whole of intelligence is just the application of rules or programs, and the order in which the programs are applied is itself determined by running a program. Where it doesn't work out, it means one (or more) of those programs isn't as good as it could be, but all of them can be modified, and the intelligent system can itself modify them experimentally, guided by advice and suggestions from the outside. Admittedly, modifying some of them could break the system and make it worse, so you'd need an undo function for that.
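The "which things are being equated" idea can be sketched in the same hedged way: a small experience database counting how often pairs of things have been treated as equivalent in past programs, with the most frequently equated counterpart proposed first and the human asked when there is no evidence to go on. The database contents and function names below are invented for illustration only.

Code:
from collections import Counter

# Records of things previously treated as equivalent, built up by earlier
# learned programs. Each key: (thing_in_old_task, thing_in_new_task).
EXPERIENCE = Counter({
    ("customer_record", "patient_record"): 7,
    ("customer_record", "invoice"): 1,
    ("sort_by_surname", "sort_by_surname"): 5,
})

def propose_equation(old_things, new_things):
    """For each thing in the old task, suggest its most frequently equated
    counterpart in the new task; ask the human when there is no evidence."""
    mapping = {}
    for old in old_things:
        scored = [(EXPERIENCE[(old, new)], new) for new in new_things]
        best_score, best = max(scored)
        if best_score == 0:
            best = input(f'What does "{old}" correspond to here? ')
        mapping[old] = best
    return mapping

print(propose_equation(
    old_things=["customer_record", "sort_by_surname"],
    new_things=["patient_record", "sort_by_surname", "invoice"],
))

Updating the counts after each confirmed or corrected guess is exactly the kind of maintenance that another learned program would handle, and a wrong guess only costs one clarifying question.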
Clearly it's still a major task to make this a reality, but it's what I got involved in OS dev to try to do. Before that I worked in linguistics (studying the grammar of dozens of languages and trying to work out what the underlying thought structures were, plus breaking down thousands of words into the fundamental components of meaning that lie behind them) and realised that my work there was directly relevant to the development of AGI, but I knew nothing about programming at that time. Half the battle since then has been in trying to work out the right direction to take, but bit by bit it has become clearer what the actual task is and how it might be solved. Most of the work I've been doing has been going in the right direction, but it's only recently that I've been able to see clearly how actual thinking works and exactly what intelligence is.
I really need to ban myself from using the Internet though so that I can get on with building it.