Hi,
Brendan wrote: Hi,
DavidCooper wrote: I also spent over a decade doing componential analysis work before I ever had any interest in computers and programming.
"Component analysis" made me think it's a compiler for natural language - so people could write source code in verbose and ambiguous English, rather than people needing to learn and use a conventional programming language. I'd find that an order of magnitude more credible than a mystical A.I. machine that creates software intelligently from thin air (without humans creating source code of some kind).
Componential analysis is breaking down words into the components of meaning within them - I should have clarified that, as it isn't instantly obvious to anyone without a background in linguistics. You seem to have understood it fairly well anyway. Clearly a machine is still going to need to be told what kind of programs it should be writing, because it will have no desires or needs of its own - it doesn't care what it writes, so if it isn't pointed in the right direction it might spend all its time doing things that are completely useless, if it does anything at all.
DavidCooper wrote: After that, it will be the A.I. that takes over the development.
And now that "order of magnitude more credible" evaporates.
If the machine can read documents and understand them, is it such a far jump for it to be able to read tutorials and manuals and work out for itself how to program something? The A.I. I'm developing can't see or hear, and it has no arms or legs so it can't move around, but what it can do is manipulate things within its own environment, and that means bytes in memory, and the tools available to it are the instructions which can run in the processor. That is its world. There is no reason why it shouldn't be able to write extensions to itself, giving it more and more capability over time, and as soon as it is competent to do that it will be able to add that capability at ridiculous speed.
What is the minimum required to enable an intelligent system to add capability to itself? Suppose you want to convert a decimal number into hex and you already have code that converts decimal to binary and binary to hex - the obvious thing to do is run one of those routines and then the other, and that's the job done. An intelligent system needs to be programmed to look at the resources available to it in the same way and to recognise that using them in particular combinations can create compound capabilities. When you ask the machine to do something it can't do directly with a single function, it needs to break the request down into component functions and solve the puzzle of which components are required and how they should be combined. The more complex the compound task turns out to be, the more time it will take to find a solution that fits, but the machine can break the task down into chunks and break those chunks down into even smaller chunks as it works its way towards a solution. We do exactly the same, but as the complexity grows we start to get lost in the morass and are likely to make mistakes which result in bugs. The machine will not get lost. Once the solution has been worked out, the components can be written to memory (or linked to if they already exist as routines), and then the code can be run.
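The decimal-to-hex example above can be sketched in a few lines - this is only an illustration of the composition idea, with hypothetical names, not the actual system being described: each routine is registered with the form of data it consumes and produces, and a breadth-first search finds a chain of routines linking the requested input form to the requested output form.

```python
# Minimal sketch of composing existing routines into a compound capability.
# All names here are hypothetical illustrations.
from collections import deque

ROUTINES = {}  # (input_form, output_form) -> function

def register(src, dst):
    """Record a routine together with the data forms it converts between."""
    def wrap(fn):
        ROUTINES[(src, dst)] = fn
        return fn
    return wrap

@register("decimal", "binary")
def decimal_to_binary(s):
    return bin(int(s))[2:]

@register("binary", "hex")
def binary_to_hex(b):
    return hex(int(b, 2))[2:]

def solve(src, dst):
    """Breadth-first search for a chain of routines turning src-form into dst-form."""
    queue = deque([(src, [])])
    seen = {src}
    while queue:
        form, chain = queue.popleft()
        if form == dst:
            return chain
        for (a, b), fn in ROUTINES.items():
            if a == form and b not in seen:
                seen.add(b)
                queue.append((b, chain + [fn]))
    return None  # no combination of known routines does the job

# No decimal->hex routine exists, but the system composes one from the two above:
chain = solve("decimal", "hex")
value = "255"
for fn in chain:
    value = fn(value)
# value is now "ff"
```

The same search scales to longer chains: as more routines are registered, requests that no single routine can satisfy become solvable by ever-deeper combinations, which is the "compound capability" point made above.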
Now, when your boss tells you to write a program, he treats you as if you were a machine just like that - you write the program to fit his requirements. So who is it that actually writes the program? Is it you, or is it your boss? The answer is both - you do all the hard work of solving the puzzle, wading through all the complications and working out everything that might go wrong if the program isn't written correctly, while your boss does the easy bit of deciding that he needs a program to do x. When I said that the A.I. takes over the development, I meant that it will do all the hard work, while the boss will tell it he wants it to do x.
Imagine a "black box" containing 20 of the smartest programmers that have ever lived plus all equipment they could possibly want (including pizza!). The black box really wants to write software (those 20 programmers are getting very bored). How do you tell the black box what you want? Do you write a detailed description of what you want (a very high level form of source code), or do you just let them waste time and resources creating something you don't want?
I want it to display a photo. It researches how photos are stored as data, works out how to translate that into a form suitable for sending to the screen, and then it sends it to the screen. Did I just program it to display a photo?
Most of the time the box is idle - I haven't given it enough to do, except in a way I have. I've told it to use any spare time it has to find problems that people want solving and to try to solve them. The box looks on the internet and studies everything it finds. A lot of people want to understand the nature of reality, so maybe it could put some time into thinking about string theory. A lot of people want the world to be run better, so maybe it could put some time into looking at what politicians are saying and score them on how rational they are. A lot of other black boxes out there might be duplicating the same work, so maybe they should all speak to each other and make sure they are doing unique work. A country is being run by a mass-murdering dictator, so maybe it would be a good idea to talk to all the people in positions of power in that country and coordinate a coup so that all the bastards can be killed at the same moment and the people be liberated. So, the black box needs to be told what to do. It doesn't want to do anything, but it is programmed to do what it is told to do, just so long as it doesn't do anything immoral (its computational morality module uses a formula based on harm minimisation).
If it's possible to create a mystical A.I. machine that creates software intelligently from thin air, you'd still have the same problem as the "black box of 20 programmers".
Suppose a company with no idea about what it should do hires some programmers. They ask the boss what he wants them to program. The boss has no answer. They sit there all day every day doing nothing. Then one day the boss says, "Hey! I want you to build me an operating system and a whole stack of applications to run on top of it." The programmers get to work. A couple of years later, the boss is happy that he has written an OS and a stack of applications, all done just by coming out with a single sentence of 21 words. Or, the programmers have written it, triggered into action by the needs of their boss.
A.I. will never have needs of its own, so it will always need to be triggered into action, although it will be able to spot needs and act to try to solve them just by observing the world around it.
Cheers,
David